00:00:00.001 Started by upstream project "autotest-per-patch" build number 121268 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 21687 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.045 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.046 The recommended git tool is: git 00:00:00.046 using credential 00000000-0000-0000-0000-000000000002 00:00:00.047 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.075 Fetching changes from the remote Git repository 00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.140 Using shallow fetch with depth 1 00:00:00.140 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.140 > git --version # timeout=10 00:00:00.188 > git --version # 'git version 2.39.2' 00:00:00.188 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.191 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.191 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/5 # timeout=5 00:00:03.467 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.478 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.489 Checking out Revision 415cb19f136d7a4a8ee08a5c51a72ee2989a84eb (FETCH_HEAD) 00:00:03.490 > git config core.sparsecheckout # timeout=10 00:00:03.503 > git read-tree -mu HEAD # timeout=10 00:00:03.519 > git checkout -f 415cb19f136d7a4a8ee08a5c51a72ee2989a84eb # timeout=5 00:00:03.541 Commit message: "jobs/autotest-upstream: Enable ASan, UBSan on all jobs" 00:00:03.541 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:03.627 [Pipeline] Start of Pipeline 00:00:03.639 [Pipeline] library 00:00:03.640 Loading library shm_lib@master 00:00:03.640 Library shm_lib@master is cached. Copying from home. 00:00:03.657 [Pipeline] node 00:00:18.659 Still waiting to schedule task 00:00:18.659 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:15.632 Running on VM-host-WFP1 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:15.635 [Pipeline] { 00:07:15.649 [Pipeline] catchError 00:07:15.651 [Pipeline] { 00:07:15.668 [Pipeline] wrap 00:07:15.680 [Pipeline] { 00:07:15.689 [Pipeline] stage 00:07:15.691 [Pipeline] { (Prologue) 00:07:15.715 [Pipeline] echo 00:07:15.716 Node: VM-host-WFP1 00:07:15.724 [Pipeline] cleanWs 00:07:15.733 [WS-CLEANUP] Deleting project workspace... 00:07:15.733 [WS-CLEANUP] Deferred wipeout is used... 
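Note: the jbp checkout above is just a shallow fetch of a single Gerrit patchset ref followed by a detached checkout of FETCH_HEAD. A minimal local equivalent, with the URL, ref and revision taken from the log (outside of Jenkins' sparse-checkout/read-tree handling), would be:

  git init jbp && cd jbp
  # shallow-fetch patchset 5 of change 22839 from the build_pool project
  git fetch --tags --force --depth=1 \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/39/22839/5
  # FETCH_HEAD resolves to the revision Jenkins reports checking out above
  git checkout -f 415cb19f136d7a4a8ee08a5c51a72ee2989a84eb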
00:07:15.740 [WS-CLEANUP] done 00:07:15.937 [Pipeline] setCustomBuildProperty 00:07:16.013 [Pipeline] nodesByLabel 00:07:16.014 Found a total of 1 nodes with the 'sorcerer' label 00:07:16.026 [Pipeline] httpRequest 00:07:16.031 HttpMethod: GET 00:07:16.031 URL: http://10.211.164.96/packages/jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:07:16.032 Sending request to url: http://10.211.164.96/packages/jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:07:16.034 Response Code: HTTP/1.1 200 OK 00:07:16.034 Success: Status code 200 is in the accepted range: 200,404 00:07:16.035 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:07:16.180 [Pipeline] sh 00:07:16.466 + tar --no-same-owner -xf jbp_415cb19f136d7a4a8ee08a5c51a72ee2989a84eb.tar.gz 00:07:16.487 [Pipeline] httpRequest 00:07:16.493 HttpMethod: GET 00:07:16.493 URL: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:07:16.494 Sending request to url: http://10.211.164.96/packages/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:07:16.496 Response Code: HTTP/1.1 200 OK 00:07:16.497 Success: Status code 200 is in the accepted range: 200,404 00:07:16.497 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:07:18.814 [Pipeline] sh 00:07:19.098 + tar --no-same-owner -xf spdk_8571999d826071a4793ae93dc583715f292620f7.tar.gz 00:07:21.645 [Pipeline] sh 00:07:21.925 + git -C spdk log --oneline -n5 00:07:21.925 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:07:21.925 06472fb6d lib/idxd: fix batch size in kernel IDXD 00:07:21.925 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD 00:07:21.925 3dbaa93c1 nvmf: pass command dword 12 and 13 for write 00:07:21.925 19327fc3a bdev/nvme: use dtype/dspec for write commands 00:07:21.945 [Pipeline] writeFile 00:07:21.962 [Pipeline] sh 00:07:22.242 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:22.252 [Pipeline] sh 00:07:22.530 + cat autorun-spdk.conf 00:07:22.530 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:22.530 SPDK_TEST_NVMF=1 00:07:22.530 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:22.530 SPDK_TEST_USDT=1 00:07:22.530 SPDK_TEST_NVMF_MDNS=1 00:07:22.530 SPDK_RUN_ASAN=1 00:07:22.530 SPDK_RUN_UBSAN=1 00:07:22.530 NET_TYPE=virt 00:07:22.530 SPDK_JSONRPC_GO_CLIENT=1 00:07:22.530 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:22.536 RUN_NIGHTLY=0 00:07:22.539 [Pipeline] } 00:07:22.555 [Pipeline] // stage 00:07:22.572 [Pipeline] stage 00:07:22.574 [Pipeline] { (Run VM) 00:07:22.588 [Pipeline] sh 00:07:22.868 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:22.868 + echo 'Start stage prepare_nvme.sh' 00:07:22.868 Start stage prepare_nvme.sh 00:07:22.868 + [[ -n 1 ]] 00:07:22.868 + disk_prefix=ex1 00:07:22.868 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:07:22.868 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:07:22.868 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:07:22.868 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:22.868 ++ SPDK_TEST_NVMF=1 00:07:22.868 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:22.868 ++ SPDK_TEST_USDT=1 00:07:22.868 ++ SPDK_TEST_NVMF_MDNS=1 00:07:22.868 ++ SPDK_RUN_ASAN=1 00:07:22.868 ++ SPDK_RUN_UBSAN=1 00:07:22.868 ++ NET_TYPE=virt 00:07:22.868 ++ SPDK_JSONRPC_GO_CLIENT=1 00:07:22.868 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:22.868 ++ RUN_NIGHTLY=0 00:07:22.868 
+ cd /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:22.868 + nvme_files=() 00:07:22.868 + declare -A nvme_files 00:07:22.869 + backend_dir=/var/lib/libvirt/images/backends 00:07:22.869 + nvme_files['nvme.img']=5G 00:07:22.869 + nvme_files['nvme-cmb.img']=5G 00:07:22.869 + nvme_files['nvme-multi0.img']=4G 00:07:22.869 + nvme_files['nvme-multi1.img']=4G 00:07:22.869 + nvme_files['nvme-multi2.img']=4G 00:07:22.869 + nvme_files['nvme-openstack.img']=8G 00:07:22.869 + nvme_files['nvme-zns.img']=5G 00:07:22.869 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:22.869 + (( SPDK_TEST_FTL == 1 )) 00:07:22.869 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:22.869 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:07:22.869 + for nvme in "${!nvme_files[@]}" 00:07:22.869 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:07:22.869 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:22.869 + for nvme in "${!nvme_files[@]}" 00:07:22.869 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:07:22.869 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:22.869 + for nvme in "${!nvme_files[@]}" 00:07:22.869 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:07:22.869 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:22.869 + for nvme in "${!nvme_files[@]}" 00:07:22.869 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:07:23.435 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:23.435 + for nvme in "${!nvme_files[@]}" 00:07:23.435 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:07:23.693 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:23.693 + for nvme in "${!nvme_files[@]}" 00:07:23.693 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:07:23.693 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:23.693 + for nvme in "${!nvme_files[@]}" 00:07:23.693 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:07:24.285 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:24.285 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:07:24.285 + echo 'End stage prepare_nvme.sh' 00:07:24.285 End stage prepare_nvme.sh 00:07:24.297 [Pipeline] sh 00:07:24.578 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:24.578 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38 00:07:24.578 00:07:24.578 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:07:24.578 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:07:24.578 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:24.578 HELP=0 00:07:24.578 DRY_RUN=0 00:07:24.578 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img, 00:07:24.578 NVME_DISKS_TYPE=nvme,nvme, 00:07:24.578 NVME_AUTO_CREATE=0 00:07:24.578 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img, 00:07:24.578 NVME_CMB=,, 00:07:24.578 NVME_PMR=,, 00:07:24.578 NVME_ZNS=,, 00:07:24.578 NVME_MS=,, 00:07:24.578 NVME_FDP=,, 00:07:24.578 SPDK_VAGRANT_DISTRO=fedora38 00:07:24.578 SPDK_VAGRANT_VMCPU=10 00:07:24.578 SPDK_VAGRANT_VMRAM=12288 00:07:24.578 SPDK_VAGRANT_PROVIDER=libvirt 00:07:24.578 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:24.578 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:24.578 SPDK_OPENSTACK_NETWORK=0 00:07:24.578 VAGRANT_PACKAGE_BOX=0 00:07:24.578 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:07:24.578 FORCE_DISTRO=true 00:07:24.578 VAGRANT_BOX_VERSION= 00:07:24.578 EXTRA_VAGRANTFILES= 00:07:24.578 NIC_MODEL=e1000 00:07:24.578 00:07:24.578 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:07:24.578 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:07:27.111 Bringing machine 'default' up with 'libvirt' provider... 00:07:28.484 ==> default: Creating image (snapshot of base box volume). 00:07:28.484 ==> default: Creating domain with the following settings... 00:07:28.484 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714139407_15cfe7306fa52fdcca00 00:07:28.484 ==> default: -- Domain type: kvm 00:07:28.484 ==> default: -- Cpus: 10 00:07:28.484 ==> default: -- Feature: acpi 00:07:28.484 ==> default: -- Feature: apic 00:07:28.484 ==> default: -- Feature: pae 00:07:28.484 ==> default: -- Memory: 12288M 00:07:28.484 ==> default: -- Memory Backing: hugepages: 00:07:28.484 ==> default: -- Management MAC: 00:07:28.484 ==> default: -- Loader: 00:07:28.484 ==> default: -- Nvram: 00:07:28.484 ==> default: -- Base box: spdk/fedora38 00:07:28.484 ==> default: -- Storage pool: default 00:07:28.484 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714139407_15cfe7306fa52fdcca00.img (20G) 00:07:28.484 ==> default: -- Volume Cache: default 00:07:28.484 ==> default: -- Kernel: 00:07:28.484 ==> default: -- Initrd: 00:07:28.484 ==> default: -- Graphics Type: vnc 00:07:28.484 ==> default: -- Graphics Port: -1 00:07:28.484 ==> default: -- Graphics IP: 127.0.0.1 00:07:28.484 ==> default: -- Graphics Password: Not defined 00:07:28.484 ==> default: -- Video Type: cirrus 00:07:28.484 ==> default: -- Video VRAM: 9216 00:07:28.484 ==> default: -- Sound Type: 00:07:28.484 ==> default: -- Keymap: en-us 00:07:28.484 ==> default: -- TPM Path: 00:07:28.484 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:28.484 ==> default: -- Command line args: 00:07:28.484 ==> default: -> value=-device, 00:07:28.484 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:28.484 ==> default: -> value=-drive, 00:07:28.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:07:28.484 ==> default: -> value=-device, 00:07:28.484 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.484 ==> default: -> value=-device, 00:07:28.484 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:28.484 ==> default: -> value=-drive, 00:07:28.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:28.484 ==> default: -> value=-device, 00:07:28.484 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.484 ==> default: -> value=-drive, 00:07:28.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:28.484 ==> default: -> value=-device, 00:07:28.484 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:28.484 ==> default: -> value=-drive, 00:07:28.484 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:28.484 ==> default: -> value=-device, 00:07:28.484 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:29.062 ==> default: Creating shared folders metadata... 00:07:29.062 ==> default: Starting domain. 00:07:31.604 ==> default: Waiting for domain to get an IP address... 00:07:49.706 ==> default: Waiting for SSH to become available... 00:07:50.644 ==> default: Configuring and enabling network interfaces... 00:07:55.941 default: SSH address: 192.168.121.65:22 00:07:55.941 default: SSH username: vagrant 00:07:55.941 default: SSH auth method: private key 00:07:59.222 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:08:07.404 ==> default: Mounting SSHFS shared folder... 00:08:09.306 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:08:09.306 ==> default: Checking Mount.. 00:08:10.702 ==> default: Folder Successfully Mounted! 00:08:10.702 ==> default: Running provisioner: file... 00:08:12.079 default: ~/.gitconfig => .gitconfig 00:08:12.338 00:08:12.338 SUCCESS! 00:08:12.338 00:08:12.338 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:08:12.338 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:12.338 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
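The NVMe topology this VM gets is easier to read as plain QEMU arguments than as libvirt passthrough args: one single-namespace controller backed by ex1-nvme.img and one three-namespace controller backed by the ex1-nvme-multi*.img files. A sketch of an equivalent standalone invocation (device/drive arguments and image paths copied from the log, CPU and memory counts from the domain settings above; the OS boot disk and the rest of the machine definition are left out, so this is illustrative rather than the exact libvirt-generated command line):

  qemu-system-x86_64 -enable-kvm -smp 10 -m 12288 \
    -device nvme,id=nvme-0,serial=12340,addr=0x10 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -device nvme,id=nvme-1,serial=12341,addr=0x11 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096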
00:08:12.338 00:08:12.346 [Pipeline] } 00:08:12.362 [Pipeline] // stage 00:08:12.370 [Pipeline] dir 00:08:12.371 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:08:12.372 [Pipeline] { 00:08:12.384 [Pipeline] catchError 00:08:12.386 [Pipeline] { 00:08:12.400 [Pipeline] sh 00:08:12.681 + vagrant ssh-config --host vagrant 00:08:12.681 + + tee ssh_conf 00:08:12.681 sed -ne /^Host/,$p 00:08:15.966 Host vagrant 00:08:15.966 HostName 192.168.121.65 00:08:15.966 User vagrant 00:08:15.966 Port 22 00:08:15.966 UserKnownHostsFile /dev/null 00:08:15.966 StrictHostKeyChecking no 00:08:15.966 PasswordAuthentication no 00:08:15.966 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:08:15.966 IdentitiesOnly yes 00:08:15.966 LogLevel FATAL 00:08:15.966 ForwardAgent yes 00:08:15.966 ForwardX11 yes 00:08:15.966 00:08:15.982 [Pipeline] withEnv 00:08:15.984 [Pipeline] { 00:08:15.999 [Pipeline] sh 00:08:16.276 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:16.276 source /etc/os-release 00:08:16.276 [[ -e /image.version ]] && img=$(< /image.version) 00:08:16.276 # Minimal, systemd-like check. 00:08:16.276 if [[ -e /.dockerenv ]]; then 00:08:16.276 # Clear garbage from the node's name: 00:08:16.276 # agt-er_autotest_547-896 -> autotest_547-896 00:08:16.276 # $HOSTNAME is the actual container id 00:08:16.276 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:16.276 if mountpoint -q /etc/hostname; then 00:08:16.276 # We can assume this is a mount from a host where container is running, 00:08:16.276 # so fetch its hostname to easily identify the target swarm worker. 00:08:16.276 container="$(< /etc/hostname) ($agent)" 00:08:16.276 else 00:08:16.276 # Fallback 00:08:16.276 container=$agent 00:08:16.276 fi 00:08:16.276 fi 00:08:16.276 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:16.276 00:08:16.544 [Pipeline] } 00:08:16.563 [Pipeline] // withEnv 00:08:16.571 [Pipeline] setCustomBuildProperty 00:08:16.585 [Pipeline] stage 00:08:16.588 [Pipeline] { (Tests) 00:08:16.607 [Pipeline] sh 00:08:16.885 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:17.158 [Pipeline] timeout 00:08:17.158 Timeout set to expire in 40 min 00:08:17.160 [Pipeline] { 00:08:17.177 [Pipeline] sh 00:08:17.464 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:18.060 HEAD is now at 8571999d8 test/scheduler: Stop moving all processes between cgroups 00:08:18.072 [Pipeline] sh 00:08:18.351 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:18.624 [Pipeline] sh 00:08:18.904 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:19.177 [Pipeline] sh 00:08:19.456 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:08:19.716 ++ readlink -f spdk_repo 00:08:19.716 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:19.716 + [[ -n /home/vagrant/spdk_repo ]] 00:08:19.716 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:19.716 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:19.716 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:19.716 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:19.716 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:19.716 + cd /home/vagrant/spdk_repo 00:08:19.716 + source /etc/os-release 00:08:19.716 ++ NAME='Fedora Linux' 00:08:19.716 ++ VERSION='38 (Cloud Edition)' 00:08:19.716 ++ ID=fedora 00:08:19.716 ++ VERSION_ID=38 00:08:19.716 ++ VERSION_CODENAME= 00:08:19.716 ++ PLATFORM_ID=platform:f38 00:08:19.716 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:19.716 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:19.716 ++ LOGO=fedora-logo-icon 00:08:19.716 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:19.716 ++ HOME_URL=https://fedoraproject.org/ 00:08:19.716 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:19.716 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:19.716 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:19.716 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:19.716 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:19.716 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:19.716 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:19.716 ++ SUPPORT_END=2024-05-14 00:08:19.716 ++ VARIANT='Cloud Edition' 00:08:19.716 ++ VARIANT_ID=cloud 00:08:19.716 + uname -a 00:08:19.716 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:19.716 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:20.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:20.284 Hugepages 00:08:20.284 node hugesize free / total 00:08:20.284 node0 1048576kB 0 / 0 00:08:20.284 node0 2048kB 0 / 0 00:08:20.284 00:08:20.284 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:20.284 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:20.284 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:20.284 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:20.284 + rm -f /tmp/spdk-ld-path 00:08:20.284 + source autorun-spdk.conf 00:08:20.284 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:20.284 ++ SPDK_TEST_NVMF=1 00:08:20.284 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:20.284 ++ SPDK_TEST_USDT=1 00:08:20.284 ++ SPDK_TEST_NVMF_MDNS=1 00:08:20.284 ++ SPDK_RUN_ASAN=1 00:08:20.284 ++ SPDK_RUN_UBSAN=1 00:08:20.284 ++ NET_TYPE=virt 00:08:20.284 ++ SPDK_JSONRPC_GO_CLIENT=1 00:08:20.284 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:20.284 ++ RUN_NIGHTLY=0 00:08:20.284 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:20.284 + [[ -n '' ]] 00:08:20.284 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:20.284 + for M in /var/spdk/build-*-manifest.txt 00:08:20.284 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:20.284 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:20.284 + for M in /var/spdk/build-*-manifest.txt 00:08:20.284 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:20.284 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:20.284 ++ uname 00:08:20.284 + [[ Linux == \L\i\n\u\x ]] 00:08:20.284 + sudo dmesg -T 00:08:20.543 + sudo dmesg --clear 00:08:20.543 + dmesg_pid=5103 00:08:20.543 + [[ Fedora Linux == FreeBSD ]] 00:08:20.543 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.543 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.543 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:20.544 + [[ -x /usr/src/fio-static/fio ]] 00:08:20.544 + sudo dmesg -Tw 00:08:20.544 + export FIO_BIN=/usr/src/fio-static/fio 
00:08:20.544 + FIO_BIN=/usr/src/fio-static/fio 00:08:20.544 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:20.544 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:20.544 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:20.544 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.544 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.544 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:20.544 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.544 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.544 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:20.544 Test configuration: 00:08:20.544 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:20.544 SPDK_TEST_NVMF=1 00:08:20.544 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:20.544 SPDK_TEST_USDT=1 00:08:20.544 SPDK_TEST_NVMF_MDNS=1 00:08:20.544 SPDK_RUN_ASAN=1 00:08:20.544 SPDK_RUN_UBSAN=1 00:08:20.544 NET_TYPE=virt 00:08:20.544 SPDK_JSONRPC_GO_CLIENT=1 00:08:20.544 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:20.544 RUN_NIGHTLY=0 13:51:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.544 13:51:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:20.544 13:51:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.544 13:51:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.544 13:51:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.544 13:51:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.544 13:51:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.544 13:51:00 -- paths/export.sh@5 -- $ export PATH 00:08:20.544 13:51:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.544 13:51:00 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:20.544 13:51:00 -- common/autobuild_common.sh@435 -- $ date +%s 00:08:20.544 13:51:00 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714139460.XXXXXX 00:08:20.544 13:51:00 -- common/autobuild_common.sh@435 -- $ 
SPDK_WORKSPACE=/tmp/spdk_1714139460.FTTamN 00:08:20.544 13:51:00 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:08:20.544 13:51:00 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:08:20.544 13:51:00 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:08:20.544 13:51:00 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:20.544 13:51:00 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:20.544 13:51:00 -- common/autobuild_common.sh@451 -- $ get_config_params 00:08:20.544 13:51:00 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:08:20.544 13:51:00 -- common/autotest_common.sh@10 -- $ set +x 00:08:20.544 13:51:00 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang' 00:08:20.544 13:51:00 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:08:20.544 13:51:00 -- pm/common@17 -- $ local monitor 00:08:20.544 13:51:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:20.544 13:51:00 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5137 00:08:20.544 13:51:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:20.544 13:51:00 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5138 00:08:20.544 13:51:00 -- pm/common@26 -- $ sleep 1 00:08:20.544 13:51:00 -- pm/common@21 -- $ date +%s 00:08:20.544 13:51:00 -- pm/common@21 -- $ date +%s 00:08:20.544 13:51:00 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714139460 00:08:20.544 13:51:00 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714139460 00:08:20.803 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714139460_collect-vmstat.pm.log 00:08:20.803 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714139460_collect-cpu-load.pm.log 00:08:21.741 13:51:01 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:08:21.741 13:51:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:21.741 13:51:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:21.741 13:51:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:21.741 13:51:01 -- spdk/autobuild.sh@16 -- $ date -u 00:08:21.741 Fri Apr 26 01:51:01 PM UTC 2024 00:08:21.741 13:51:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:21.741 v24.05-pre-449-g8571999d8 00:08:21.741 13:51:01 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:08:21.741 13:51:01 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:08:21.741 13:51:01 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:08:21.742 13:51:01 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:08:21.742 13:51:01 -- common/autotest_common.sh@10 -- $ set +x 00:08:21.742 ************************************ 00:08:21.742 START TEST asan 00:08:21.742 ************************************ 
00:08:21.742 using asan 00:08:21.742 13:51:01 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:08:21.742 00:08:21.742 real 0m0.001s 00:08:21.742 user 0m0.001s 00:08:21.742 sys 0m0.000s 00:08:21.742 13:51:01 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:08:21.742 ************************************ 00:08:21.742 END TEST asan 00:08:21.742 ************************************ 00:08:21.742 13:51:01 -- common/autotest_common.sh@10 -- $ set +x 00:08:21.742 13:51:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:21.742 13:51:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:21.742 13:51:01 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:08:21.742 13:51:01 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:08:21.742 13:51:01 -- common/autotest_common.sh@10 -- $ set +x 00:08:22.001 ************************************ 00:08:22.001 START TEST ubsan 00:08:22.001 ************************************ 00:08:22.001 using ubsan 00:08:22.001 13:51:01 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:08:22.001 00:08:22.001 real 0m0.000s 00:08:22.001 user 0m0.000s 00:08:22.001 sys 0m0.000s 00:08:22.001 13:51:01 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:08:22.001 ************************************ 00:08:22.001 END TEST ubsan 00:08:22.001 13:51:01 -- common/autotest_common.sh@10 -- $ set +x 00:08:22.001 ************************************ 00:08:22.001 13:51:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:08:22.001 13:51:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:08:22.001 13:51:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:08:22.001 13:51:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:08:22.001 13:51:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:08:22.001 13:51:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:08:22.001 13:51:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:08:22.001 13:51:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:08:22.001 13:51:01 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:08:22.001 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:22.001 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:22.569 Using 'verbs' RDMA provider 00:08:41.592 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:08:53.793 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:08:54.051 go version go1.21.1 linux/amd64 00:08:54.620 Creating mk/config.mk...done. 00:08:54.620 Creating mk/cc.flags.mk...done. 00:08:54.620 Type 'make' to build. 00:08:54.620 13:51:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:08:54.620 13:51:33 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:08:54.620 13:51:33 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:08:54.620 13:51:33 -- common/autotest_common.sh@10 -- $ set +x 00:08:54.620 ************************************ 00:08:54.620 START TEST make 00:08:54.620 ************************************ 00:08:54.620 13:51:34 -- common/autotest_common.sh@1111 -- $ make -j10 00:08:54.878 make[1]: Nothing to be done for 'all'. 
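Stripped of the autobuild wrapper, the configure and build steps recorded above come down to the following (flags copied verbatim from the log; substitute your own SPDK checkout path):

  cd /home/vagrant/spdk_repo/spdk
  # ASan/UBSan come from the job's SPDK_RUN_ASAN=1 / SPDK_RUN_UBSAN=1 settings
  ./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
      --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
      --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-avahi \
      --with-golang --with-shared
  make -j10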
00:09:09.770 The Meson build system 00:09:09.770 Version: 1.3.1 00:09:09.770 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:09:09.770 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:09:09.770 Build type: native build 00:09:09.770 Program cat found: YES (/usr/bin/cat) 00:09:09.770 Project name: DPDK 00:09:09.770 Project version: 23.11.0 00:09:09.770 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:09.770 C linker for the host machine: cc ld.bfd 2.39-16 00:09:09.770 Host machine cpu family: x86_64 00:09:09.770 Host machine cpu: x86_64 00:09:09.770 Message: ## Building in Developer Mode ## 00:09:09.770 Program pkg-config found: YES (/usr/bin/pkg-config) 00:09:09.770 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:09:09.770 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:09:09.770 Program python3 found: YES (/usr/bin/python3) 00:09:09.770 Program cat found: YES (/usr/bin/cat) 00:09:09.770 Compiler for C supports arguments -march=native: YES 00:09:09.770 Checking for size of "void *" : 8 00:09:09.770 Checking for size of "void *" : 8 (cached) 00:09:09.770 Library m found: YES 00:09:09.770 Library numa found: YES 00:09:09.770 Has header "numaif.h" : YES 00:09:09.770 Library fdt found: NO 00:09:09.770 Library execinfo found: NO 00:09:09.770 Has header "execinfo.h" : YES 00:09:09.770 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:09.770 Run-time dependency libarchive found: NO (tried pkgconfig) 00:09:09.770 Run-time dependency libbsd found: NO (tried pkgconfig) 00:09:09.770 Run-time dependency jansson found: NO (tried pkgconfig) 00:09:09.770 Run-time dependency openssl found: YES 3.0.9 00:09:09.770 Run-time dependency libpcap found: YES 1.10.4 00:09:09.770 Has header "pcap.h" with dependency libpcap: YES 00:09:09.770 Compiler for C supports arguments -Wcast-qual: YES 00:09:09.770 Compiler for C supports arguments -Wdeprecated: YES 00:09:09.770 Compiler for C supports arguments -Wformat: YES 00:09:09.770 Compiler for C supports arguments -Wformat-nonliteral: NO 00:09:09.770 Compiler for C supports arguments -Wformat-security: NO 00:09:09.770 Compiler for C supports arguments -Wmissing-declarations: YES 00:09:09.770 Compiler for C supports arguments -Wmissing-prototypes: YES 00:09:09.770 Compiler for C supports arguments -Wnested-externs: YES 00:09:09.770 Compiler for C supports arguments -Wold-style-definition: YES 00:09:09.770 Compiler for C supports arguments -Wpointer-arith: YES 00:09:09.770 Compiler for C supports arguments -Wsign-compare: YES 00:09:09.770 Compiler for C supports arguments -Wstrict-prototypes: YES 00:09:09.770 Compiler for C supports arguments -Wundef: YES 00:09:09.770 Compiler for C supports arguments -Wwrite-strings: YES 00:09:09.770 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:09:09.770 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:09:09.770 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:09:09.770 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:09:09.770 Program objdump found: YES (/usr/bin/objdump) 00:09:09.770 Compiler for C supports arguments -mavx512f: YES 00:09:09.770 Checking if "AVX512 checking" compiles: YES 00:09:09.770 Fetching value of define "__SSE4_2__" : 1 00:09:09.770 Fetching value of define "__AES__" : 1 00:09:09.770 Fetching value of define "__AVX__" : 1 00:09:09.770 
Fetching value of define "__AVX2__" : 1 00:09:09.770 Fetching value of define "__AVX512BW__" : 1 00:09:09.770 Fetching value of define "__AVX512CD__" : 1 00:09:09.770 Fetching value of define "__AVX512DQ__" : 1 00:09:09.770 Fetching value of define "__AVX512F__" : 1 00:09:09.770 Fetching value of define "__AVX512VL__" : 1 00:09:09.770 Fetching value of define "__PCLMUL__" : 1 00:09:09.770 Fetching value of define "__RDRND__" : 1 00:09:09.770 Fetching value of define "__RDSEED__" : 1 00:09:09.770 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:09:09.770 Fetching value of define "__znver1__" : (undefined) 00:09:09.770 Fetching value of define "__znver2__" : (undefined) 00:09:09.770 Fetching value of define "__znver3__" : (undefined) 00:09:09.770 Fetching value of define "__znver4__" : (undefined) 00:09:09.770 Library asan found: YES 00:09:09.770 Compiler for C supports arguments -Wno-format-truncation: YES 00:09:09.770 Message: lib/log: Defining dependency "log" 00:09:09.770 Message: lib/kvargs: Defining dependency "kvargs" 00:09:09.770 Message: lib/telemetry: Defining dependency "telemetry" 00:09:09.770 Library rt found: YES 00:09:09.770 Checking for function "getentropy" : NO 00:09:09.770 Message: lib/eal: Defining dependency "eal" 00:09:09.770 Message: lib/ring: Defining dependency "ring" 00:09:09.770 Message: lib/rcu: Defining dependency "rcu" 00:09:09.770 Message: lib/mempool: Defining dependency "mempool" 00:09:09.770 Message: lib/mbuf: Defining dependency "mbuf" 00:09:09.770 Fetching value of define "__PCLMUL__" : 1 (cached) 00:09:09.770 Fetching value of define "__AVX512F__" : 1 (cached) 00:09:09.770 Fetching value of define "__AVX512BW__" : 1 (cached) 00:09:09.770 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:09:09.770 Fetching value of define "__AVX512VL__" : 1 (cached) 00:09:09.770 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:09:09.770 Compiler for C supports arguments -mpclmul: YES 00:09:09.770 Compiler for C supports arguments -maes: YES 00:09:09.770 Compiler for C supports arguments -mavx512f: YES (cached) 00:09:09.770 Compiler for C supports arguments -mavx512bw: YES 00:09:09.770 Compiler for C supports arguments -mavx512dq: YES 00:09:09.770 Compiler for C supports arguments -mavx512vl: YES 00:09:09.770 Compiler for C supports arguments -mvpclmulqdq: YES 00:09:09.770 Compiler for C supports arguments -mavx2: YES 00:09:09.770 Compiler for C supports arguments -mavx: YES 00:09:09.770 Message: lib/net: Defining dependency "net" 00:09:09.770 Message: lib/meter: Defining dependency "meter" 00:09:09.770 Message: lib/ethdev: Defining dependency "ethdev" 00:09:09.770 Message: lib/pci: Defining dependency "pci" 00:09:09.770 Message: lib/cmdline: Defining dependency "cmdline" 00:09:09.770 Message: lib/hash: Defining dependency "hash" 00:09:09.770 Message: lib/timer: Defining dependency "timer" 00:09:09.770 Message: lib/compressdev: Defining dependency "compressdev" 00:09:09.770 Message: lib/cryptodev: Defining dependency "cryptodev" 00:09:09.770 Message: lib/dmadev: Defining dependency "dmadev" 00:09:09.770 Compiler for C supports arguments -Wno-cast-qual: YES 00:09:09.770 Message: lib/power: Defining dependency "power" 00:09:09.770 Message: lib/reorder: Defining dependency "reorder" 00:09:09.770 Message: lib/security: Defining dependency "security" 00:09:09.770 Has header "linux/userfaultfd.h" : YES 00:09:09.770 Has header "linux/vduse.h" : YES 00:09:09.770 Message: lib/vhost: Defining dependency "vhost" 00:09:09.770 Compiler for C supports 
arguments -Wno-format-truncation: YES (cached) 00:09:09.770 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:09:09.770 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:09:09.770 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:09:09.770 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:09:09.770 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:09:09.770 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:09:09.770 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:09:09.770 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:09:09.770 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:09:09.770 Program doxygen found: YES (/usr/bin/doxygen) 00:09:09.770 Configuring doxy-api-html.conf using configuration 00:09:09.770 Configuring doxy-api-man.conf using configuration 00:09:09.770 Program mandb found: YES (/usr/bin/mandb) 00:09:09.770 Program sphinx-build found: NO 00:09:09.770 Configuring rte_build_config.h using configuration 00:09:09.770 Message: 00:09:09.770 ================= 00:09:09.770 Applications Enabled 00:09:09.770 ================= 00:09:09.770 00:09:09.770 apps: 00:09:09.770 00:09:09.770 00:09:09.771 Message: 00:09:09.771 ================= 00:09:09.771 Libraries Enabled 00:09:09.771 ================= 00:09:09.771 00:09:09.771 libs: 00:09:09.771 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:09:09.771 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:09:09.771 cryptodev, dmadev, power, reorder, security, vhost, 00:09:09.771 00:09:09.771 Message: 00:09:09.771 =============== 00:09:09.771 Drivers Enabled 00:09:09.771 =============== 00:09:09.771 00:09:09.771 common: 00:09:09.771 00:09:09.771 bus: 00:09:09.771 pci, vdev, 00:09:09.771 mempool: 00:09:09.771 ring, 00:09:09.771 dma: 00:09:09.771 00:09:09.771 net: 00:09:09.771 00:09:09.771 crypto: 00:09:09.771 00:09:09.771 compress: 00:09:09.771 00:09:09.771 vdpa: 00:09:09.771 00:09:09.771 00:09:09.771 Message: 00:09:09.771 ================= 00:09:09.771 Content Skipped 00:09:09.771 ================= 00:09:09.771 00:09:09.771 apps: 00:09:09.771 dumpcap: explicitly disabled via build config 00:09:09.771 graph: explicitly disabled via build config 00:09:09.771 pdump: explicitly disabled via build config 00:09:09.771 proc-info: explicitly disabled via build config 00:09:09.771 test-acl: explicitly disabled via build config 00:09:09.771 test-bbdev: explicitly disabled via build config 00:09:09.771 test-cmdline: explicitly disabled via build config 00:09:09.771 test-compress-perf: explicitly disabled via build config 00:09:09.771 test-crypto-perf: explicitly disabled via build config 00:09:09.771 test-dma-perf: explicitly disabled via build config 00:09:09.771 test-eventdev: explicitly disabled via build config 00:09:09.771 test-fib: explicitly disabled via build config 00:09:09.771 test-flow-perf: explicitly disabled via build config 00:09:09.771 test-gpudev: explicitly disabled via build config 00:09:09.771 test-mldev: explicitly disabled via build config 00:09:09.771 test-pipeline: explicitly disabled via build config 00:09:09.771 test-pmd: explicitly disabled via build config 00:09:09.771 test-regex: explicitly disabled via build config 00:09:09.771 test-sad: explicitly disabled via build config 00:09:09.771 test-security-perf: explicitly disabled via build config 00:09:09.771 00:09:09.771 libs: 00:09:09.771 metrics: explicitly 
disabled via build config 00:09:09.771 acl: explicitly disabled via build config 00:09:09.771 bbdev: explicitly disabled via build config 00:09:09.771 bitratestats: explicitly disabled via build config 00:09:09.771 bpf: explicitly disabled via build config 00:09:09.771 cfgfile: explicitly disabled via build config 00:09:09.771 distributor: explicitly disabled via build config 00:09:09.771 efd: explicitly disabled via build config 00:09:09.771 eventdev: explicitly disabled via build config 00:09:09.771 dispatcher: explicitly disabled via build config 00:09:09.771 gpudev: explicitly disabled via build config 00:09:09.771 gro: explicitly disabled via build config 00:09:09.771 gso: explicitly disabled via build config 00:09:09.771 ip_frag: explicitly disabled via build config 00:09:09.771 jobstats: explicitly disabled via build config 00:09:09.771 latencystats: explicitly disabled via build config 00:09:09.771 lpm: explicitly disabled via build config 00:09:09.771 member: explicitly disabled via build config 00:09:09.771 pcapng: explicitly disabled via build config 00:09:09.771 rawdev: explicitly disabled via build config 00:09:09.771 regexdev: explicitly disabled via build config 00:09:09.771 mldev: explicitly disabled via build config 00:09:09.771 rib: explicitly disabled via build config 00:09:09.771 sched: explicitly disabled via build config 00:09:09.771 stack: explicitly disabled via build config 00:09:09.771 ipsec: explicitly disabled via build config 00:09:09.771 pdcp: explicitly disabled via build config 00:09:09.771 fib: explicitly disabled via build config 00:09:09.771 port: explicitly disabled via build config 00:09:09.771 pdump: explicitly disabled via build config 00:09:09.771 table: explicitly disabled via build config 00:09:09.771 pipeline: explicitly disabled via build config 00:09:09.771 graph: explicitly disabled via build config 00:09:09.771 node: explicitly disabled via build config 00:09:09.771 00:09:09.771 drivers: 00:09:09.771 common/cpt: not in enabled drivers build config 00:09:09.771 common/dpaax: not in enabled drivers build config 00:09:09.771 common/iavf: not in enabled drivers build config 00:09:09.771 common/idpf: not in enabled drivers build config 00:09:09.771 common/mvep: not in enabled drivers build config 00:09:09.771 common/octeontx: not in enabled drivers build config 00:09:09.771 bus/auxiliary: not in enabled drivers build config 00:09:09.771 bus/cdx: not in enabled drivers build config 00:09:09.771 bus/dpaa: not in enabled drivers build config 00:09:09.771 bus/fslmc: not in enabled drivers build config 00:09:09.771 bus/ifpga: not in enabled drivers build config 00:09:09.771 bus/platform: not in enabled drivers build config 00:09:09.771 bus/vmbus: not in enabled drivers build config 00:09:09.771 common/cnxk: not in enabled drivers build config 00:09:09.771 common/mlx5: not in enabled drivers build config 00:09:09.771 common/nfp: not in enabled drivers build config 00:09:09.771 common/qat: not in enabled drivers build config 00:09:09.771 common/sfc_efx: not in enabled drivers build config 00:09:09.771 mempool/bucket: not in enabled drivers build config 00:09:09.771 mempool/cnxk: not in enabled drivers build config 00:09:09.771 mempool/dpaa: not in enabled drivers build config 00:09:09.771 mempool/dpaa2: not in enabled drivers build config 00:09:09.771 mempool/octeontx: not in enabled drivers build config 00:09:09.771 mempool/stack: not in enabled drivers build config 00:09:09.771 dma/cnxk: not in enabled drivers build config 00:09:09.771 dma/dpaa: not in 
enabled drivers build config 00:09:09.771 dma/dpaa2: not in enabled drivers build config 00:09:09.771 dma/hisilicon: not in enabled drivers build config 00:09:09.771 dma/idxd: not in enabled drivers build config 00:09:09.771 dma/ioat: not in enabled drivers build config 00:09:09.771 dma/skeleton: not in enabled drivers build config 00:09:09.771 net/af_packet: not in enabled drivers build config 00:09:09.771 net/af_xdp: not in enabled drivers build config 00:09:09.771 net/ark: not in enabled drivers build config 00:09:09.771 net/atlantic: not in enabled drivers build config 00:09:09.771 net/avp: not in enabled drivers build config 00:09:09.771 net/axgbe: not in enabled drivers build config 00:09:09.771 net/bnx2x: not in enabled drivers build config 00:09:09.771 net/bnxt: not in enabled drivers build config 00:09:09.771 net/bonding: not in enabled drivers build config 00:09:09.771 net/cnxk: not in enabled drivers build config 00:09:09.771 net/cpfl: not in enabled drivers build config 00:09:09.771 net/cxgbe: not in enabled drivers build config 00:09:09.771 net/dpaa: not in enabled drivers build config 00:09:09.771 net/dpaa2: not in enabled drivers build config 00:09:09.771 net/e1000: not in enabled drivers build config 00:09:09.771 net/ena: not in enabled drivers build config 00:09:09.771 net/enetc: not in enabled drivers build config 00:09:09.771 net/enetfec: not in enabled drivers build config 00:09:09.771 net/enic: not in enabled drivers build config 00:09:09.771 net/failsafe: not in enabled drivers build config 00:09:09.771 net/fm10k: not in enabled drivers build config 00:09:09.771 net/gve: not in enabled drivers build config 00:09:09.771 net/hinic: not in enabled drivers build config 00:09:09.771 net/hns3: not in enabled drivers build config 00:09:09.771 net/i40e: not in enabled drivers build config 00:09:09.771 net/iavf: not in enabled drivers build config 00:09:09.771 net/ice: not in enabled drivers build config 00:09:09.771 net/idpf: not in enabled drivers build config 00:09:09.771 net/igc: not in enabled drivers build config 00:09:09.771 net/ionic: not in enabled drivers build config 00:09:09.771 net/ipn3ke: not in enabled drivers build config 00:09:09.771 net/ixgbe: not in enabled drivers build config 00:09:09.771 net/mana: not in enabled drivers build config 00:09:09.771 net/memif: not in enabled drivers build config 00:09:09.771 net/mlx4: not in enabled drivers build config 00:09:09.771 net/mlx5: not in enabled drivers build config 00:09:09.771 net/mvneta: not in enabled drivers build config 00:09:09.771 net/mvpp2: not in enabled drivers build config 00:09:09.771 net/netvsc: not in enabled drivers build config 00:09:09.771 net/nfb: not in enabled drivers build config 00:09:09.771 net/nfp: not in enabled drivers build config 00:09:09.771 net/ngbe: not in enabled drivers build config 00:09:09.771 net/null: not in enabled drivers build config 00:09:09.771 net/octeontx: not in enabled drivers build config 00:09:09.771 net/octeon_ep: not in enabled drivers build config 00:09:09.771 net/pcap: not in enabled drivers build config 00:09:09.771 net/pfe: not in enabled drivers build config 00:09:09.771 net/qede: not in enabled drivers build config 00:09:09.771 net/ring: not in enabled drivers build config 00:09:09.771 net/sfc: not in enabled drivers build config 00:09:09.771 net/softnic: not in enabled drivers build config 00:09:09.771 net/tap: not in enabled drivers build config 00:09:09.771 net/thunderx: not in enabled drivers build config 00:09:09.771 net/txgbe: not in enabled drivers 
build config 00:09:09.771 net/vdev_netvsc: not in enabled drivers build config 00:09:09.771 net/vhost: not in enabled drivers build config 00:09:09.771 net/virtio: not in enabled drivers build config 00:09:09.771 net/vmxnet3: not in enabled drivers build config 00:09:09.771 raw/*: missing internal dependency, "rawdev" 00:09:09.771 crypto/armv8: not in enabled drivers build config 00:09:09.771 crypto/bcmfs: not in enabled drivers build config 00:09:09.771 crypto/caam_jr: not in enabled drivers build config 00:09:09.771 crypto/ccp: not in enabled drivers build config 00:09:09.771 crypto/cnxk: not in enabled drivers build config 00:09:09.771 crypto/dpaa_sec: not in enabled drivers build config 00:09:09.771 crypto/dpaa2_sec: not in enabled drivers build config 00:09:09.771 crypto/ipsec_mb: not in enabled drivers build config 00:09:09.771 crypto/mlx5: not in enabled drivers build config 00:09:09.771 crypto/mvsam: not in enabled drivers build config 00:09:09.771 crypto/nitrox: not in enabled drivers build config 00:09:09.771 crypto/null: not in enabled drivers build config 00:09:09.771 crypto/octeontx: not in enabled drivers build config 00:09:09.771 crypto/openssl: not in enabled drivers build config 00:09:09.771 crypto/scheduler: not in enabled drivers build config 00:09:09.771 crypto/uadk: not in enabled drivers build config 00:09:09.771 crypto/virtio: not in enabled drivers build config 00:09:09.771 compress/isal: not in enabled drivers build config 00:09:09.771 compress/mlx5: not in enabled drivers build config 00:09:09.771 compress/octeontx: not in enabled drivers build config 00:09:09.771 compress/zlib: not in enabled drivers build config 00:09:09.771 regex/*: missing internal dependency, "regexdev" 00:09:09.771 ml/*: missing internal dependency, "mldev" 00:09:09.771 vdpa/ifc: not in enabled drivers build config 00:09:09.771 vdpa/mlx5: not in enabled drivers build config 00:09:09.771 vdpa/nfp: not in enabled drivers build config 00:09:09.771 vdpa/sfc: not in enabled drivers build config 00:09:09.771 event/*: missing internal dependency, "eventdev" 00:09:09.771 baseband/*: missing internal dependency, "bbdev" 00:09:09.771 gpu/*: missing internal dependency, "gpudev" 00:09:09.771 00:09:09.771 00:09:09.771 Build targets in project: 85 00:09:09.771 00:09:09.771 DPDK 23.11.0 00:09:09.771 00:09:09.771 User defined options 00:09:09.771 buildtype : debug 00:09:09.771 default_library : shared 00:09:09.771 libdir : lib 00:09:09.771 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:09.771 b_sanitize : address 00:09:09.771 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:09:09.771 c_link_args : 00:09:09.771 cpu_instruction_set: native 00:09:09.771 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:09:09.771 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:09:09.771 enable_docs : false 00:09:09.771 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:09:09.771 enable_kmods : false 00:09:09.771 tests : false 00:09:09.771 00:09:09.771 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:09.771 ninja: Entering directory 
`/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:09:09.771 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:09:09.771 [2/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:09:09.771 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:09:09.771 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:09:09.771 [5/265] Linking static target lib/librte_kvargs.a 00:09:09.771 [6/265] Linking static target lib/librte_log.a 00:09:09.771 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:09:09.771 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:09:09.771 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:09:09.771 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:09:09.771 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:09:09.771 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:09:09.771 [13/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:09:09.771 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:09:09.771 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:09:09.771 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:09:09.772 [17/265] Linking static target lib/librte_telemetry.a 00:09:09.772 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:09:09.772 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:09:09.772 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:09:09.772 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:09:09.772 [22/265] Linking target lib/librte_log.so.24.0 00:09:09.772 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:09:09.772 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:09:09.772 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:09:09.772 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:09:09.772 [27/265] Linking target lib/librte_kvargs.so.24.0 00:09:09.772 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:09:09.772 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:09:09.772 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:09:09.772 [31/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:09:09.772 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:09:09.772 [33/265] Linking target lib/librte_telemetry.so.24.0 00:09:09.772 [34/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:09:09.772 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:09:09.772 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:09:09.772 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:09:10.031 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:09:10.031 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:09:10.031 [40/265] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:09:10.031 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:09:10.031 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:09:10.031 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:09:10.031 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:09:10.289 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:09:10.289 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:09:10.289 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:09:10.289 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:09:10.548 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:09:10.548 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:09:10.548 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:09:10.548 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:09:10.548 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:09:10.548 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:09:10.548 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:09:10.548 [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:09:10.548 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:09:10.806 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:09:10.806 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:09:10.806 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:09:10.806 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:09:10.806 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:09:10.806 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:09:10.806 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:09:10.806 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:09:11.065 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:09:11.065 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:09:11.065 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:09:11.065 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:09:11.065 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:09:11.065 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:09:11.324 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:09:11.324 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:09:11.324 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:09:11.324 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:09:11.324 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:09:11.324 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:09:11.324 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:09:11.324 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:09:11.583 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:09:11.583 [81/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:09:11.842 [82/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:09:11.842 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:09:11.842 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:09:11.842 [85/265] Linking static target lib/librte_ring.a 00:09:11.842 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:09:11.842 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:09:11.842 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:09:11.842 [89/265] Linking static target lib/librte_rcu.a 00:09:12.101 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:09:12.101 [91/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:09:12.101 [92/265] Linking static target lib/librte_eal.a 00:09:12.101 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:09:12.101 [94/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:09:12.101 [95/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:09:12.101 [96/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:09:12.360 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:09:12.360 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:09:12.360 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:09:12.360 [100/265] Linking static target lib/librte_mempool.a 00:09:12.360 [101/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:09:12.360 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:09:12.360 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:09:12.618 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:09:12.876 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:09:12.876 [106/265] Linking static target lib/librte_mbuf.a 00:09:12.876 [107/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:09:12.876 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:09:12.876 [109/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:09:12.876 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:09:12.876 [111/265] Linking static target lib/librte_net.a 00:09:12.876 [112/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:09:12.876 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:09:12.876 [114/265] Linking static target lib/librte_meter.a 00:09:13.442 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.442 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.442 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:09:13.442 [118/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.442 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:09:13.442 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:09:13.717 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:09:13.717 
[122/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:09:13.987 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:09:13.987 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:09:13.987 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:09:13.987 [126/265] Linking static target lib/librte_pci.a 00:09:13.987 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:09:13.987 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:09:14.256 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:09:14.256 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:09:14.256 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:09:14.256 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:09:14.256 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:09:14.256 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:09:14.256 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:09:14.256 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:09:14.524 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:09:14.524 [138/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:14.524 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:09:14.524 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:09:14.524 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:09:14.524 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:09:14.524 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:09:14.524 [144/265] Linking static target lib/librte_cmdline.a 00:09:14.524 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:09:14.782 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:09:15.042 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:09:15.042 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:09:15.042 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:09:15.042 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:09:15.042 [151/265] Linking static target lib/librte_timer.a 00:09:15.301 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:09:15.301 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:09:15.301 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:09:15.301 [155/265] Linking static target lib/librte_ethdev.a 00:09:15.301 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:09:15.301 [157/265] Linking static target lib/librte_compressdev.a 00:09:15.561 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:09:15.561 [159/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:09:15.561 [160/265] Linking static target lib/librte_hash.a 00:09:15.561 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:09:15.561 [162/265] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:09:15.561 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:09:15.820 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:09:15.820 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:09:15.820 [166/265] Linking static target lib/librte_dmadev.a 00:09:15.820 [167/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:16.079 [168/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:09:16.079 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:09:16.079 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:09:16.079 [171/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:16.079 [172/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:09:16.338 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:09:16.338 [174/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:16.338 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:09:16.338 [176/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:09:16.338 [177/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:09:16.338 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:09:16.338 [179/265] Linking static target lib/librte_cryptodev.a 00:09:16.338 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:09:16.338 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:09:16.597 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:09:16.597 [183/265] Linking static target lib/librte_power.a 00:09:16.857 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:09:16.857 [185/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:09:16.857 [186/265] Linking static target lib/librte_reorder.a 00:09:16.857 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:09:16.857 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:09:16.857 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:09:16.857 [190/265] Linking static target lib/librte_security.a 00:09:17.117 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.117 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:09:17.376 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.376 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:09:17.376 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:09:17.635 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:09:17.635 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:09:17.635 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:09:17.635 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:09:17.894 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:09:17.894 [201/265] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:09:17.894 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:09:17.894 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:09:17.894 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:09:18.154 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:09:18.154 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:09:18.154 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:09:18.154 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:09:18.154 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:09:18.154 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:09:18.154 [211/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:18.154 [212/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:09:18.413 [213/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:18.413 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:09:18.413 [215/265] Linking static target drivers/librte_bus_vdev.a 00:09:18.413 [216/265] Linking static target drivers/librte_bus_pci.a 00:09:18.413 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:09:18.413 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:09:18.413 [219/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:18.672 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:09:18.672 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:18.672 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:09:18.672 [223/265] Linking static target drivers/librte_mempool_ring.a 00:09:18.931 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:09:19.870 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:09:23.157 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.157 [227/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:09:23.157 [228/265] Linking static target lib/librte_vhost.a 00:09:23.417 [229/265] Linking target lib/librte_eal.so.24.0 00:09:23.417 [230/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:09:23.417 [231/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:09:23.417 [232/265] Linking target lib/librte_meter.so.24.0 00:09:23.417 [233/265] Linking target lib/librte_pci.so.24.0 00:09:23.417 [234/265] Linking target lib/librte_dmadev.so.24.0 00:09:23.417 [235/265] Linking target lib/librte_ring.so.24.0 00:09:23.417 [236/265] Linking target lib/librte_timer.so.24.0 00:09:23.417 [237/265] Linking target drivers/librte_bus_vdev.so.24.0 00:09:23.676 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:09:23.676 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:09:23.676 [240/265] Generating symbol file 
lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:09:23.676 [241/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:09:23.676 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:09:23.676 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:09:23.676 [244/265] Linking target lib/librte_rcu.so.24.0 00:09:23.676 [245/265] Linking target lib/librte_mempool.so.24.0 00:09:23.935 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:09:23.935 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:09:23.935 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:09:23.935 [249/265] Linking target lib/librte_mbuf.so.24.0 00:09:23.935 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:09:24.193 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:09:24.193 [252/265] Linking target lib/librte_compressdev.so.24.0 00:09:24.193 [253/265] Linking target lib/librte_reorder.so.24.0 00:09:24.193 [254/265] Linking target lib/librte_net.so.24.0 00:09:24.193 [255/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:09:24.193 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:09:24.452 [257/265] Linking target lib/librte_hash.so.24.0 00:09:24.452 [258/265] Linking target lib/librte_security.so.24.0 00:09:24.452 [259/265] Linking target lib/librte_cmdline.so.24.0 00:09:24.452 [260/265] Linking target lib/librte_ethdev.so.24.0 00:09:24.452 [261/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:09:24.452 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:09:24.452 [263/265] Linking target lib/librte_power.so.24.0 00:09:25.387 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:25.645 [265/265] Linking target lib/librte_vhost.so.24.0 00:09:25.645 INFO: autodetecting backend as ninja 00:09:25.645 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:09:27.024 CC lib/log/log_deprecated.o 00:09:27.024 CC lib/ut/ut.o 00:09:27.024 CC lib/log/log.o 00:09:27.024 CC lib/log/log_flags.o 00:09:27.024 CC lib/ut_mock/mock.o 00:09:27.024 LIB libspdk_ut_mock.a 00:09:27.024 LIB libspdk_ut.a 00:09:27.024 SO libspdk_ut_mock.so.6.0 00:09:27.024 LIB libspdk_log.a 00:09:27.024 SO libspdk_ut.so.2.0 00:09:27.024 SO libspdk_log.so.7.0 00:09:27.024 SYMLINK libspdk_ut_mock.so 00:09:27.024 SYMLINK libspdk_ut.so 00:09:27.024 SYMLINK libspdk_log.so 00:09:27.283 CC lib/dma/dma.o 00:09:27.283 CXX lib/trace_parser/trace.o 00:09:27.284 CC lib/ioat/ioat.o 00:09:27.284 CC lib/util/base64.o 00:09:27.284 CC lib/util/bit_array.o 00:09:27.543 CC lib/util/cpuset.o 00:09:27.543 CC lib/util/crc16.o 00:09:27.543 CC lib/util/crc32.o 00:09:27.543 CC lib/util/crc32c.o 00:09:27.543 CC lib/vfio_user/host/vfio_user_pci.o 00:09:27.543 CC lib/vfio_user/host/vfio_user.o 00:09:27.543 CC lib/util/crc32_ieee.o 00:09:27.543 LIB libspdk_dma.a 00:09:27.543 CC lib/util/crc64.o 00:09:27.543 SO libspdk_dma.so.4.0 00:09:27.543 CC lib/util/dif.o 00:09:27.543 CC lib/util/fd.o 00:09:27.543 CC lib/util/file.o 00:09:27.543 SYMLINK libspdk_dma.so 00:09:27.802 CC lib/util/hexlify.o 00:09:27.802 CC lib/util/iov.o 00:09:27.802 LIB libspdk_ioat.a 00:09:27.802 CC lib/util/math.o 00:09:27.802 SO 
libspdk_ioat.so.7.0 00:09:27.802 CC lib/util/pipe.o 00:09:27.802 LIB libspdk_vfio_user.a 00:09:27.802 CC lib/util/strerror_tls.o 00:09:27.802 CC lib/util/string.o 00:09:27.802 SYMLINK libspdk_ioat.so 00:09:27.802 CC lib/util/uuid.o 00:09:27.802 SO libspdk_vfio_user.so.5.0 00:09:27.802 CC lib/util/fd_group.o 00:09:27.802 CC lib/util/xor.o 00:09:27.802 SYMLINK libspdk_vfio_user.so 00:09:27.802 CC lib/util/zipf.o 00:09:28.370 LIB libspdk_util.a 00:09:28.371 LIB libspdk_trace_parser.a 00:09:28.371 SO libspdk_util.so.9.0 00:09:28.371 SO libspdk_trace_parser.so.5.0 00:09:28.630 SYMLINK libspdk_trace_parser.so 00:09:28.630 SYMLINK libspdk_util.so 00:09:28.889 CC lib/rdma/common.o 00:09:28.889 CC lib/rdma/rdma_verbs.o 00:09:28.889 CC lib/json/json_parse.o 00:09:28.889 CC lib/env_dpdk/env.o 00:09:28.889 CC lib/json/json_util.o 00:09:28.889 CC lib/json/json_write.o 00:09:28.889 CC lib/env_dpdk/memory.o 00:09:28.889 CC lib/idxd/idxd.o 00:09:28.889 CC lib/vmd/vmd.o 00:09:28.889 CC lib/conf/conf.o 00:09:28.889 CC lib/vmd/led.o 00:09:29.161 CC lib/idxd/idxd_user.o 00:09:29.161 LIB libspdk_conf.a 00:09:29.161 CC lib/env_dpdk/pci.o 00:09:29.161 SO libspdk_conf.so.6.0 00:09:29.161 LIB libspdk_rdma.a 00:09:29.161 LIB libspdk_json.a 00:09:29.161 SO libspdk_rdma.so.6.0 00:09:29.161 SO libspdk_json.so.6.0 00:09:29.161 SYMLINK libspdk_conf.so 00:09:29.161 CC lib/env_dpdk/init.o 00:09:29.161 CC lib/env_dpdk/threads.o 00:09:29.161 SYMLINK libspdk_json.so 00:09:29.161 SYMLINK libspdk_rdma.so 00:09:29.161 CC lib/env_dpdk/pci_ioat.o 00:09:29.161 CC lib/env_dpdk/pci_virtio.o 00:09:29.444 CC lib/env_dpdk/pci_vmd.o 00:09:29.444 CC lib/env_dpdk/pci_idxd.o 00:09:29.444 CC lib/jsonrpc/jsonrpc_server.o 00:09:29.444 CC lib/env_dpdk/pci_event.o 00:09:29.444 LIB libspdk_idxd.a 00:09:29.444 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:09:29.444 CC lib/env_dpdk/sigbus_handler.o 00:09:29.444 SO libspdk_idxd.so.12.0 00:09:29.444 CC lib/env_dpdk/pci_dpdk.o 00:09:29.444 CC lib/env_dpdk/pci_dpdk_2207.o 00:09:29.444 CC lib/env_dpdk/pci_dpdk_2211.o 00:09:29.444 SYMLINK libspdk_idxd.so 00:09:29.444 CC lib/jsonrpc/jsonrpc_client.o 00:09:29.444 LIB libspdk_vmd.a 00:09:29.444 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:09:29.703 SO libspdk_vmd.so.6.0 00:09:29.703 SYMLINK libspdk_vmd.so 00:09:29.703 LIB libspdk_jsonrpc.a 00:09:29.962 SO libspdk_jsonrpc.so.6.0 00:09:29.962 SYMLINK libspdk_jsonrpc.so 00:09:30.222 LIB libspdk_env_dpdk.a 00:09:30.222 CC lib/rpc/rpc.o 00:09:30.481 SO libspdk_env_dpdk.so.14.0 00:09:30.481 LIB libspdk_rpc.a 00:09:30.740 SYMLINK libspdk_env_dpdk.so 00:09:30.740 SO libspdk_rpc.so.6.0 00:09:30.740 SYMLINK libspdk_rpc.so 00:09:30.998 CC lib/notify/notify.o 00:09:30.998 CC lib/notify/notify_rpc.o 00:09:30.998 CC lib/trace/trace.o 00:09:30.998 CC lib/trace/trace_rpc.o 00:09:30.998 CC lib/trace/trace_flags.o 00:09:30.998 CC lib/keyring/keyring.o 00:09:30.998 CC lib/keyring/keyring_rpc.o 00:09:31.257 LIB libspdk_notify.a 00:09:31.257 SO libspdk_notify.so.6.0 00:09:31.257 LIB libspdk_trace.a 00:09:31.257 LIB libspdk_keyring.a 00:09:31.257 SYMLINK libspdk_notify.so 00:09:31.257 SO libspdk_trace.so.10.0 00:09:31.679 SO libspdk_keyring.so.1.0 00:09:31.679 SYMLINK libspdk_trace.so 00:09:31.679 SYMLINK libspdk_keyring.so 00:09:31.937 CC lib/thread/thread.o 00:09:31.937 CC lib/thread/iobuf.o 00:09:31.937 CC lib/sock/sock.o 00:09:31.937 CC lib/sock/sock_rpc.o 00:09:32.194 LIB libspdk_sock.a 00:09:32.194 SO libspdk_sock.so.9.0 00:09:32.452 SYMLINK libspdk_sock.so 00:09:32.711 CC lib/nvme/nvme_ctrlr_cmd.o 00:09:32.711 CC 
lib/nvme/nvme_fabric.o 00:09:32.711 CC lib/nvme/nvme_ctrlr.o 00:09:32.711 CC lib/nvme/nvme_ns_cmd.o 00:09:32.711 CC lib/nvme/nvme_ns.o 00:09:32.711 CC lib/nvme/nvme_pcie_common.o 00:09:32.711 CC lib/nvme/nvme_pcie.o 00:09:32.711 CC lib/nvme/nvme_qpair.o 00:09:32.711 CC lib/nvme/nvme.o 00:09:33.277 CC lib/nvme/nvme_quirks.o 00:09:33.277 CC lib/nvme/nvme_transport.o 00:09:33.536 CC lib/nvme/nvme_discovery.o 00:09:33.536 LIB libspdk_thread.a 00:09:33.536 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:09:33.536 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:09:33.536 SO libspdk_thread.so.10.0 00:09:33.536 CC lib/nvme/nvme_tcp.o 00:09:33.536 CC lib/nvme/nvme_opal.o 00:09:33.799 SYMLINK libspdk_thread.so 00:09:33.799 CC lib/accel/accel.o 00:09:33.799 CC lib/nvme/nvme_io_msg.o 00:09:33.799 CC lib/nvme/nvme_poll_group.o 00:09:34.065 CC lib/accel/accel_rpc.o 00:09:34.065 CC lib/accel/accel_sw.o 00:09:34.065 CC lib/nvme/nvme_zns.o 00:09:34.065 CC lib/nvme/nvme_stubs.o 00:09:34.065 CC lib/nvme/nvme_auth.o 00:09:34.333 CC lib/nvme/nvme_cuse.o 00:09:34.333 CC lib/nvme/nvme_rdma.o 00:09:34.604 CC lib/blob/blobstore.o 00:09:34.604 CC lib/init/json_config.o 00:09:34.604 CC lib/blob/request.o 00:09:34.604 CC lib/blob/zeroes.o 00:09:34.877 CC lib/init/subsystem.o 00:09:34.877 CC lib/blob/blob_bs_dev.o 00:09:34.877 LIB libspdk_accel.a 00:09:34.877 SO libspdk_accel.so.15.0 00:09:34.877 CC lib/init/subsystem_rpc.o 00:09:34.877 CC lib/init/rpc.o 00:09:35.148 SYMLINK libspdk_accel.so 00:09:35.148 CC lib/virtio/virtio_vfio_user.o 00:09:35.148 CC lib/virtio/virtio.o 00:09:35.148 CC lib/virtio/virtio_pci.o 00:09:35.148 CC lib/virtio/virtio_vhost_user.o 00:09:35.148 LIB libspdk_init.a 00:09:35.148 SO libspdk_init.so.5.0 00:09:35.148 CC lib/bdev/bdev.o 00:09:35.148 CC lib/bdev/bdev_rpc.o 00:09:35.148 CC lib/bdev/bdev_zone.o 00:09:35.406 SYMLINK libspdk_init.so 00:09:35.406 CC lib/bdev/part.o 00:09:35.406 CC lib/bdev/scsi_nvme.o 00:09:35.406 LIB libspdk_virtio.a 00:09:35.665 CC lib/event/app.o 00:09:35.665 CC lib/event/reactor.o 00:09:35.665 CC lib/event/log_rpc.o 00:09:35.665 SO libspdk_virtio.so.7.0 00:09:35.665 CC lib/event/app_rpc.o 00:09:35.665 CC lib/event/scheduler_static.o 00:09:35.665 SYMLINK libspdk_virtio.so 00:09:35.665 LIB libspdk_nvme.a 00:09:35.924 SO libspdk_nvme.so.13.0 00:09:35.924 LIB libspdk_event.a 00:09:36.183 SO libspdk_event.so.13.0 00:09:36.184 SYMLINK libspdk_event.so 00:09:36.184 SYMLINK libspdk_nvme.so 00:09:38.090 LIB libspdk_blob.a 00:09:38.090 SO libspdk_blob.so.11.0 00:09:38.090 LIB libspdk_bdev.a 00:09:38.090 SYMLINK libspdk_blob.so 00:09:38.090 SO libspdk_bdev.so.15.0 00:09:38.349 SYMLINK libspdk_bdev.so 00:09:38.349 CC lib/blobfs/blobfs.o 00:09:38.349 CC lib/blobfs/tree.o 00:09:38.349 CC lib/lvol/lvol.o 00:09:38.349 CC lib/ftl/ftl_core.o 00:09:38.349 CC lib/ftl/ftl_init.o 00:09:38.349 CC lib/ftl/ftl_layout.o 00:09:38.349 CC lib/scsi/dev.o 00:09:38.608 CC lib/ublk/ublk.o 00:09:38.608 CC lib/nvmf/ctrlr.o 00:09:38.608 CC lib/nbd/nbd.o 00:09:38.608 CC lib/nvmf/ctrlr_discovery.o 00:09:38.608 CC lib/nvmf/ctrlr_bdev.o 00:09:38.608 CC lib/scsi/lun.o 00:09:38.867 CC lib/scsi/port.o 00:09:38.867 CC lib/ftl/ftl_debug.o 00:09:38.867 CC lib/nbd/nbd_rpc.o 00:09:38.867 CC lib/ftl/ftl_io.o 00:09:39.162 CC lib/scsi/scsi.o 00:09:39.162 CC lib/nvmf/subsystem.o 00:09:39.162 CC lib/ublk/ublk_rpc.o 00:09:39.162 LIB libspdk_nbd.a 00:09:39.162 SO libspdk_nbd.so.7.0 00:09:39.162 CC lib/scsi/scsi_bdev.o 00:09:39.162 SYMLINK libspdk_nbd.so 00:09:39.162 CC lib/nvmf/nvmf.o 00:09:39.162 CC lib/scsi/scsi_pr.o 00:09:39.162 CC 
lib/ftl/ftl_sb.o 00:09:39.421 LIB libspdk_ublk.a 00:09:39.421 LIB libspdk_blobfs.a 00:09:39.421 SO libspdk_ublk.so.3.0 00:09:39.421 LIB libspdk_lvol.a 00:09:39.421 SO libspdk_blobfs.so.10.0 00:09:39.421 CC lib/ftl/ftl_l2p.o 00:09:39.421 CC lib/nvmf/nvmf_rpc.o 00:09:39.421 SO libspdk_lvol.so.10.0 00:09:39.421 SYMLINK libspdk_ublk.so 00:09:39.421 CC lib/ftl/ftl_l2p_flat.o 00:09:39.680 SYMLINK libspdk_blobfs.so 00:09:39.680 CC lib/ftl/ftl_nv_cache.o 00:09:39.680 SYMLINK libspdk_lvol.so 00:09:39.680 CC lib/ftl/ftl_band.o 00:09:39.680 CC lib/ftl/ftl_band_ops.o 00:09:39.680 CC lib/scsi/scsi_rpc.o 00:09:39.680 CC lib/ftl/ftl_writer.o 00:09:39.680 CC lib/ftl/ftl_rq.o 00:09:39.938 CC lib/scsi/task.o 00:09:39.938 CC lib/nvmf/transport.o 00:09:39.938 CC lib/ftl/ftl_reloc.o 00:09:39.938 CC lib/nvmf/tcp.o 00:09:39.938 CC lib/ftl/ftl_l2p_cache.o 00:09:40.197 LIB libspdk_scsi.a 00:09:40.197 SO libspdk_scsi.so.9.0 00:09:40.197 CC lib/nvmf/rdma.o 00:09:40.197 SYMLINK libspdk_scsi.so 00:09:40.197 CC lib/ftl/ftl_p2l.o 00:09:40.455 CC lib/ftl/mngt/ftl_mngt.o 00:09:40.455 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:09:40.455 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:09:40.455 CC lib/ftl/mngt/ftl_mngt_startup.o 00:09:40.714 CC lib/ftl/mngt/ftl_mngt_md.o 00:09:40.714 CC lib/ftl/mngt/ftl_mngt_misc.o 00:09:40.714 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:09:40.714 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:09:40.714 CC lib/ftl/mngt/ftl_mngt_band.o 00:09:40.714 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:09:40.714 CC lib/vhost/vhost.o 00:09:40.972 CC lib/iscsi/conn.o 00:09:40.972 CC lib/iscsi/init_grp.o 00:09:40.972 CC lib/vhost/vhost_rpc.o 00:09:40.972 CC lib/vhost/vhost_scsi.o 00:09:40.972 CC lib/vhost/vhost_blk.o 00:09:40.972 CC lib/vhost/rte_vhost_user.o 00:09:40.972 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:09:41.231 CC lib/iscsi/iscsi.o 00:09:41.231 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:09:41.490 CC lib/iscsi/md5.o 00:09:41.490 CC lib/iscsi/param.o 00:09:41.490 CC lib/iscsi/portal_grp.o 00:09:41.749 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:09:41.749 CC lib/iscsi/tgt_node.o 00:09:41.749 CC lib/iscsi/iscsi_subsystem.o 00:09:41.749 CC lib/ftl/utils/ftl_conf.o 00:09:42.007 CC lib/iscsi/iscsi_rpc.o 00:09:42.007 CC lib/iscsi/task.o 00:09:42.007 CC lib/ftl/utils/ftl_md.o 00:09:42.007 CC lib/ftl/utils/ftl_mempool.o 00:09:42.007 CC lib/ftl/utils/ftl_bitmap.o 00:09:42.007 CC lib/ftl/utils/ftl_property.o 00:09:42.007 LIB libspdk_vhost.a 00:09:42.266 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:09:42.266 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:09:42.266 SO libspdk_vhost.so.8.0 00:09:42.266 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:09:42.266 SYMLINK libspdk_vhost.so 00:09:42.266 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:09:42.266 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:09:42.266 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:09:42.266 CC lib/ftl/upgrade/ftl_sb_v3.o 00:09:42.526 CC lib/ftl/upgrade/ftl_sb_v5.o 00:09:42.526 CC lib/ftl/nvc/ftl_nvc_dev.o 00:09:42.526 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:09:42.526 CC lib/ftl/base/ftl_base_dev.o 00:09:42.526 CC lib/ftl/base/ftl_base_bdev.o 00:09:42.526 CC lib/ftl/ftl_trace.o 00:09:42.784 LIB libspdk_ftl.a 00:09:42.784 LIB libspdk_nvmf.a 00:09:42.784 LIB libspdk_iscsi.a 00:09:43.043 SO libspdk_iscsi.so.8.0 00:09:43.043 SO libspdk_nvmf.so.18.0 00:09:43.043 SO libspdk_ftl.so.9.0 00:09:43.302 SYMLINK libspdk_iscsi.so 00:09:43.302 SYMLINK libspdk_nvmf.so 00:09:43.560 SYMLINK libspdk_ftl.so 00:09:43.856 CC module/env_dpdk/env_dpdk_rpc.o 00:09:43.856 CC module/blob/bdev/blob_bdev.o 00:09:43.856 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:09:43.856 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:09:43.856 CC module/keyring/file/keyring.o 00:09:43.856 CC module/accel/dsa/accel_dsa.o 00:09:43.856 CC module/sock/posix/posix.o 00:09:44.114 CC module/scheduler/gscheduler/gscheduler.o 00:09:44.114 CC module/accel/error/accel_error.o 00:09:44.114 CC module/accel/ioat/accel_ioat.o 00:09:44.114 LIB libspdk_env_dpdk_rpc.a 00:09:44.114 SO libspdk_env_dpdk_rpc.so.6.0 00:09:44.114 SYMLINK libspdk_env_dpdk_rpc.so 00:09:44.114 CC module/keyring/file/keyring_rpc.o 00:09:44.114 CC module/accel/error/accel_error_rpc.o 00:09:44.114 LIB libspdk_scheduler_dpdk_governor.a 00:09:44.114 LIB libspdk_scheduler_gscheduler.a 00:09:44.114 SO libspdk_scheduler_dpdk_governor.so.4.0 00:09:44.114 SO libspdk_scheduler_gscheduler.so.4.0 00:09:44.114 LIB libspdk_scheduler_dynamic.a 00:09:44.114 CC module/accel/dsa/accel_dsa_rpc.o 00:09:44.114 CC module/accel/ioat/accel_ioat_rpc.o 00:09:44.114 SO libspdk_scheduler_dynamic.so.4.0 00:09:44.114 SYMLINK libspdk_scheduler_gscheduler.so 00:09:44.114 LIB libspdk_blob_bdev.a 00:09:44.114 SYMLINK libspdk_scheduler_dpdk_governor.so 00:09:44.114 LIB libspdk_accel_error.a 00:09:44.114 SYMLINK libspdk_scheduler_dynamic.so 00:09:44.373 LIB libspdk_keyring_file.a 00:09:44.373 SO libspdk_blob_bdev.so.11.0 00:09:44.373 SO libspdk_accel_error.so.2.0 00:09:44.373 SO libspdk_keyring_file.so.1.0 00:09:44.373 LIB libspdk_accel_ioat.a 00:09:44.373 LIB libspdk_accel_dsa.a 00:09:44.373 SYMLINK libspdk_blob_bdev.so 00:09:44.373 SYMLINK libspdk_keyring_file.so 00:09:44.373 SO libspdk_accel_ioat.so.6.0 00:09:44.373 SYMLINK libspdk_accel_error.so 00:09:44.373 SO libspdk_accel_dsa.so.5.0 00:09:44.373 CC module/accel/iaa/accel_iaa.o 00:09:44.373 CC module/accel/iaa/accel_iaa_rpc.o 00:09:44.373 SYMLINK libspdk_accel_ioat.so 00:09:44.373 SYMLINK libspdk_accel_dsa.so 00:09:44.632 CC module/blobfs/bdev/blobfs_bdev.o 00:09:44.632 CC module/bdev/lvol/vbdev_lvol.o 00:09:44.632 LIB libspdk_accel_iaa.a 00:09:44.632 CC module/bdev/error/vbdev_error.o 00:09:44.632 CC module/bdev/delay/vbdev_delay.o 00:09:44.632 CC module/bdev/malloc/bdev_malloc.o 00:09:44.632 CC module/bdev/null/bdev_null.o 00:09:44.632 CC module/bdev/gpt/gpt.o 00:09:44.632 SO libspdk_accel_iaa.so.3.0 00:09:44.632 CC module/bdev/nvme/bdev_nvme.o 00:09:44.632 SYMLINK libspdk_accel_iaa.so 00:09:44.632 CC module/bdev/malloc/bdev_malloc_rpc.o 00:09:44.632 LIB libspdk_sock_posix.a 00:09:44.890 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:09:44.890 SO libspdk_sock_posix.so.6.0 00:09:44.890 CC module/bdev/gpt/vbdev_gpt.o 00:09:44.890 CC module/bdev/nvme/bdev_nvme_rpc.o 00:09:44.890 CC module/bdev/error/vbdev_error_rpc.o 00:09:44.890 SYMLINK libspdk_sock_posix.so 00:09:44.890 CC module/bdev/null/bdev_null_rpc.o 00:09:44.890 CC module/bdev/nvme/nvme_rpc.o 00:09:44.890 LIB libspdk_blobfs_bdev.a 00:09:44.890 SO libspdk_blobfs_bdev.so.6.0 00:09:44.890 CC module/bdev/delay/vbdev_delay_rpc.o 00:09:44.890 LIB libspdk_bdev_malloc.a 00:09:45.150 SO libspdk_bdev_malloc.so.6.0 00:09:45.150 LIB libspdk_bdev_error.a 00:09:45.150 LIB libspdk_bdev_null.a 00:09:45.150 SYMLINK libspdk_blobfs_bdev.so 00:09:45.150 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:09:45.150 SO libspdk_bdev_error.so.6.0 00:09:45.150 SO libspdk_bdev_null.so.6.0 00:09:45.150 LIB libspdk_bdev_gpt.a 00:09:45.150 CC module/bdev/nvme/bdev_mdns_client.o 00:09:45.150 SYMLINK libspdk_bdev_malloc.so 00:09:45.150 CC module/bdev/nvme/vbdev_opal.o 00:09:45.150 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:09:45.150 SO libspdk_bdev_gpt.so.6.0 00:09:45.150 SYMLINK libspdk_bdev_error.so 00:09:45.150 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:09:45.150 LIB libspdk_bdev_delay.a 00:09:45.150 SYMLINK libspdk_bdev_null.so 00:09:45.150 SYMLINK libspdk_bdev_gpt.so 00:09:45.150 SO libspdk_bdev_delay.so.6.0 00:09:45.409 SYMLINK libspdk_bdev_delay.so 00:09:45.409 LIB libspdk_bdev_lvol.a 00:09:45.409 CC module/bdev/raid/bdev_raid.o 00:09:45.409 CC module/bdev/passthru/vbdev_passthru.o 00:09:45.409 SO libspdk_bdev_lvol.so.6.0 00:09:45.410 CC module/bdev/split/vbdev_split.o 00:09:45.410 CC module/bdev/zone_block/vbdev_zone_block.o 00:09:45.410 CC module/bdev/aio/bdev_aio.o 00:09:45.410 SYMLINK libspdk_bdev_lvol.so 00:09:45.410 CC module/bdev/aio/bdev_aio_rpc.o 00:09:45.668 CC module/bdev/ftl/bdev_ftl.o 00:09:45.668 CC module/bdev/iscsi/bdev_iscsi.o 00:09:45.668 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:09:45.668 CC module/bdev/raid/bdev_raid_rpc.o 00:09:45.668 CC module/bdev/virtio/bdev_virtio_scsi.o 00:09:45.668 CC module/bdev/split/vbdev_split_rpc.o 00:09:45.927 LIB libspdk_bdev_passthru.a 00:09:45.927 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:09:45.927 CC module/bdev/ftl/bdev_ftl_rpc.o 00:09:45.927 LIB libspdk_bdev_split.a 00:09:45.927 LIB libspdk_bdev_aio.a 00:09:45.927 SO libspdk_bdev_passthru.so.6.0 00:09:45.927 SO libspdk_bdev_split.so.6.0 00:09:45.927 SO libspdk_bdev_aio.so.6.0 00:09:45.927 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:09:45.927 SYMLINK libspdk_bdev_passthru.so 00:09:45.927 CC module/bdev/virtio/bdev_virtio_blk.o 00:09:45.927 SYMLINK libspdk_bdev_split.so 00:09:45.927 CC module/bdev/raid/bdev_raid_sb.o 00:09:45.927 SYMLINK libspdk_bdev_aio.so 00:09:45.927 CC module/bdev/raid/raid0.o 00:09:45.927 CC module/bdev/virtio/bdev_virtio_rpc.o 00:09:45.927 LIB libspdk_bdev_zone_block.a 00:09:45.927 SO libspdk_bdev_zone_block.so.6.0 00:09:46.186 LIB libspdk_bdev_ftl.a 00:09:46.186 SO libspdk_bdev_ftl.so.6.0 00:09:46.186 LIB libspdk_bdev_iscsi.a 00:09:46.186 SYMLINK libspdk_bdev_zone_block.so 00:09:46.186 CC module/bdev/raid/raid1.o 00:09:46.186 SO libspdk_bdev_iscsi.so.6.0 00:09:46.186 SYMLINK libspdk_bdev_ftl.so 00:09:46.186 CC module/bdev/raid/concat.o 00:09:46.186 SYMLINK libspdk_bdev_iscsi.so 00:09:46.186 LIB libspdk_bdev_virtio.a 00:09:46.445 SO libspdk_bdev_virtio.so.6.0 00:09:46.445 SYMLINK libspdk_bdev_virtio.so 00:09:46.445 LIB libspdk_bdev_raid.a 00:09:46.445 SO libspdk_bdev_raid.so.6.0 00:09:46.703 SYMLINK libspdk_bdev_raid.so 00:09:47.269 LIB libspdk_bdev_nvme.a 00:09:47.269 SO libspdk_bdev_nvme.so.7.0 00:09:47.269 SYMLINK libspdk_bdev_nvme.so 00:09:47.899 CC module/event/subsystems/sock/sock.o 00:09:47.899 CC module/event/subsystems/vmd/vmd.o 00:09:47.899 CC module/event/subsystems/vmd/vmd_rpc.o 00:09:47.899 CC module/event/subsystems/scheduler/scheduler.o 00:09:47.899 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:09:47.899 CC module/event/subsystems/keyring/keyring.o 00:09:47.899 CC module/event/subsystems/iobuf/iobuf.o 00:09:47.899 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:09:48.159 LIB libspdk_event_sock.a 00:09:48.159 LIB libspdk_event_vmd.a 00:09:48.159 LIB libspdk_event_vhost_blk.a 00:09:48.159 LIB libspdk_event_keyring.a 00:09:48.159 LIB libspdk_event_scheduler.a 00:09:48.159 LIB libspdk_event_iobuf.a 00:09:48.159 SO libspdk_event_sock.so.5.0 00:09:48.159 SO libspdk_event_vhost_blk.so.3.0 00:09:48.159 SO libspdk_event_vmd.so.6.0 00:09:48.159 SO libspdk_event_scheduler.so.4.0 00:09:48.159 SO libspdk_event_keyring.so.1.0 00:09:48.159 SO 
libspdk_event_iobuf.so.3.0 00:09:48.159 SYMLINK libspdk_event_sock.so 00:09:48.159 SYMLINK libspdk_event_scheduler.so 00:09:48.159 SYMLINK libspdk_event_vmd.so 00:09:48.159 SYMLINK libspdk_event_keyring.so 00:09:48.159 SYMLINK libspdk_event_iobuf.so 00:09:48.159 SYMLINK libspdk_event_vhost_blk.so 00:09:48.726 CC module/event/subsystems/accel/accel.o 00:09:48.726 LIB libspdk_event_accel.a 00:09:48.985 SO libspdk_event_accel.so.6.0 00:09:48.985 SYMLINK libspdk_event_accel.so 00:09:49.243 CC module/event/subsystems/bdev/bdev.o 00:09:49.502 LIB libspdk_event_bdev.a 00:09:49.502 SO libspdk_event_bdev.so.6.0 00:09:49.761 SYMLINK libspdk_event_bdev.so 00:09:50.020 CC module/event/subsystems/scsi/scsi.o 00:09:50.020 CC module/event/subsystems/ublk/ublk.o 00:09:50.020 CC module/event/subsystems/nbd/nbd.o 00:09:50.020 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:09:50.020 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:09:50.020 LIB libspdk_event_ublk.a 00:09:50.278 LIB libspdk_event_scsi.a 00:09:50.278 LIB libspdk_event_nbd.a 00:09:50.278 SO libspdk_event_ublk.so.3.0 00:09:50.278 SO libspdk_event_scsi.so.6.0 00:09:50.278 SO libspdk_event_nbd.so.6.0 00:09:50.278 SYMLINK libspdk_event_ublk.so 00:09:50.278 SYMLINK libspdk_event_scsi.so 00:09:50.278 SYMLINK libspdk_event_nbd.so 00:09:50.278 LIB libspdk_event_nvmf.a 00:09:50.278 SO libspdk_event_nvmf.so.6.0 00:09:50.278 SYMLINK libspdk_event_nvmf.so 00:09:50.537 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:09:50.537 CC module/event/subsystems/iscsi/iscsi.o 00:09:50.796 LIB libspdk_event_vhost_scsi.a 00:09:50.796 SO libspdk_event_vhost_scsi.so.3.0 00:09:50.796 LIB libspdk_event_iscsi.a 00:09:50.796 SO libspdk_event_iscsi.so.6.0 00:09:50.796 SYMLINK libspdk_event_vhost_scsi.so 00:09:50.796 SYMLINK libspdk_event_iscsi.so 00:09:51.054 SO libspdk.so.6.0 00:09:51.054 SYMLINK libspdk.so 00:09:51.313 CXX app/trace/trace.o 00:09:51.313 CC app/trace_record/trace_record.o 00:09:51.572 TEST_HEADER include/spdk/accel.h 00:09:51.572 TEST_HEADER include/spdk/accel_module.h 00:09:51.572 TEST_HEADER include/spdk/assert.h 00:09:51.572 TEST_HEADER include/spdk/barrier.h 00:09:51.572 TEST_HEADER include/spdk/base64.h 00:09:51.572 TEST_HEADER include/spdk/bdev.h 00:09:51.572 TEST_HEADER include/spdk/bdev_module.h 00:09:51.572 TEST_HEADER include/spdk/bdev_zone.h 00:09:51.572 TEST_HEADER include/spdk/bit_array.h 00:09:51.572 TEST_HEADER include/spdk/bit_pool.h 00:09:51.572 TEST_HEADER include/spdk/blob_bdev.h 00:09:51.572 TEST_HEADER include/spdk/blobfs_bdev.h 00:09:51.572 TEST_HEADER include/spdk/blobfs.h 00:09:51.572 TEST_HEADER include/spdk/blob.h 00:09:51.572 TEST_HEADER include/spdk/conf.h 00:09:51.572 TEST_HEADER include/spdk/config.h 00:09:51.572 TEST_HEADER include/spdk/cpuset.h 00:09:51.572 TEST_HEADER include/spdk/crc16.h 00:09:51.572 TEST_HEADER include/spdk/crc32.h 00:09:51.572 TEST_HEADER include/spdk/crc64.h 00:09:51.572 TEST_HEADER include/spdk/dif.h 00:09:51.572 TEST_HEADER include/spdk/dma.h 00:09:51.572 TEST_HEADER include/spdk/endian.h 00:09:51.572 TEST_HEADER include/spdk/env_dpdk.h 00:09:51.572 TEST_HEADER include/spdk/env.h 00:09:51.572 TEST_HEADER include/spdk/event.h 00:09:51.572 TEST_HEADER include/spdk/fd_group.h 00:09:51.572 TEST_HEADER include/spdk/fd.h 00:09:51.572 TEST_HEADER include/spdk/file.h 00:09:51.572 TEST_HEADER include/spdk/ftl.h 00:09:51.572 TEST_HEADER include/spdk/gpt_spec.h 00:09:51.572 TEST_HEADER include/spdk/hexlify.h 00:09:51.572 TEST_HEADER include/spdk/histogram_data.h 00:09:51.572 TEST_HEADER include/spdk/idxd.h 
00:09:51.572 TEST_HEADER include/spdk/idxd_spec.h 00:09:51.572 TEST_HEADER include/spdk/init.h 00:09:51.572 TEST_HEADER include/spdk/ioat.h 00:09:51.572 TEST_HEADER include/spdk/ioat_spec.h 00:09:51.572 TEST_HEADER include/spdk/iscsi_spec.h 00:09:51.572 TEST_HEADER include/spdk/json.h 00:09:51.572 TEST_HEADER include/spdk/jsonrpc.h 00:09:51.572 CC examples/accel/perf/accel_perf.o 00:09:51.572 TEST_HEADER include/spdk/keyring.h 00:09:51.572 TEST_HEADER include/spdk/keyring_module.h 00:09:51.572 TEST_HEADER include/spdk/likely.h 00:09:51.572 TEST_HEADER include/spdk/log.h 00:09:51.572 CC test/accel/dif/dif.o 00:09:51.572 TEST_HEADER include/spdk/lvol.h 00:09:51.572 TEST_HEADER include/spdk/memory.h 00:09:51.572 TEST_HEADER include/spdk/mmio.h 00:09:51.572 TEST_HEADER include/spdk/nbd.h 00:09:51.572 TEST_HEADER include/spdk/notify.h 00:09:51.572 TEST_HEADER include/spdk/nvme.h 00:09:51.572 CC test/blobfs/mkfs/mkfs.o 00:09:51.572 TEST_HEADER include/spdk/nvme_intel.h 00:09:51.572 TEST_HEADER include/spdk/nvme_ocssd.h 00:09:51.572 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:09:51.572 TEST_HEADER include/spdk/nvme_spec.h 00:09:51.572 TEST_HEADER include/spdk/nvme_zns.h 00:09:51.572 TEST_HEADER include/spdk/nvmf_cmd.h 00:09:51.572 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:09:51.572 CC test/app/bdev_svc/bdev_svc.o 00:09:51.572 CC test/dma/test_dma/test_dma.o 00:09:51.572 TEST_HEADER include/spdk/nvmf.h 00:09:51.572 TEST_HEADER include/spdk/nvmf_spec.h 00:09:51.572 CC test/bdev/bdevio/bdevio.o 00:09:51.572 TEST_HEADER include/spdk/nvmf_transport.h 00:09:51.572 TEST_HEADER include/spdk/opal.h 00:09:51.572 TEST_HEADER include/spdk/opal_spec.h 00:09:51.572 TEST_HEADER include/spdk/pci_ids.h 00:09:51.572 TEST_HEADER include/spdk/pipe.h 00:09:51.572 TEST_HEADER include/spdk/queue.h 00:09:51.572 CC test/env/mem_callbacks/mem_callbacks.o 00:09:51.572 TEST_HEADER include/spdk/reduce.h 00:09:51.572 TEST_HEADER include/spdk/rpc.h 00:09:51.572 TEST_HEADER include/spdk/scheduler.h 00:09:51.572 TEST_HEADER include/spdk/scsi.h 00:09:51.572 TEST_HEADER include/spdk/scsi_spec.h 00:09:51.572 TEST_HEADER include/spdk/sock.h 00:09:51.572 TEST_HEADER include/spdk/stdinc.h 00:09:51.572 TEST_HEADER include/spdk/string.h 00:09:51.572 TEST_HEADER include/spdk/thread.h 00:09:51.572 TEST_HEADER include/spdk/trace.h 00:09:51.572 TEST_HEADER include/spdk/trace_parser.h 00:09:51.572 TEST_HEADER include/spdk/tree.h 00:09:51.572 TEST_HEADER include/spdk/ublk.h 00:09:51.572 TEST_HEADER include/spdk/util.h 00:09:51.572 TEST_HEADER include/spdk/uuid.h 00:09:51.572 TEST_HEADER include/spdk/version.h 00:09:51.572 TEST_HEADER include/spdk/vfio_user_pci.h 00:09:51.572 TEST_HEADER include/spdk/vfio_user_spec.h 00:09:51.572 TEST_HEADER include/spdk/vhost.h 00:09:51.572 TEST_HEADER include/spdk/vmd.h 00:09:51.572 TEST_HEADER include/spdk/xor.h 00:09:51.572 TEST_HEADER include/spdk/zipf.h 00:09:51.572 CXX test/cpp_headers/accel.o 00:09:51.572 LINK spdk_trace_record 00:09:51.831 LINK mkfs 00:09:51.831 LINK bdev_svc 00:09:51.831 LINK spdk_trace 00:09:51.831 CXX test/cpp_headers/accel_module.o 00:09:51.831 CXX test/cpp_headers/assert.o 00:09:52.088 CXX test/cpp_headers/barrier.o 00:09:52.088 LINK dif 00:09:52.088 LINK bdevio 00:09:52.088 LINK test_dma 00:09:52.088 LINK accel_perf 00:09:52.088 LINK mem_callbacks 00:09:52.088 CXX test/cpp_headers/base64.o 00:09:52.088 CC app/nvmf_tgt/nvmf_main.o 00:09:52.088 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:09:52.088 CC examples/blob/hello_world/hello_blob.o 00:09:52.088 CXX test/cpp_headers/bdev.o 
00:09:52.088 CXX test/cpp_headers/bdev_module.o 00:09:52.088 CC examples/bdev/hello_world/hello_bdev.o 00:09:52.346 CC test/env/vtophys/vtophys.o 00:09:52.346 CC app/iscsi_tgt/iscsi_tgt.o 00:09:52.346 CC examples/bdev/bdevperf/bdevperf.o 00:09:52.346 LINK nvmf_tgt 00:09:52.603 CXX test/cpp_headers/bdev_zone.o 00:09:52.603 LINK hello_bdev 00:09:52.603 LINK hello_blob 00:09:52.603 CC test/event/event_perf/event_perf.o 00:09:52.603 LINK vtophys 00:09:52.603 LINK iscsi_tgt 00:09:52.603 CXX test/cpp_headers/bit_array.o 00:09:52.603 LINK nvme_fuzz 00:09:52.860 LINK event_perf 00:09:52.860 CC test/lvol/esnap/esnap.o 00:09:52.860 CXX test/cpp_headers/bit_pool.o 00:09:52.860 CC test/app/histogram_perf/histogram_perf.o 00:09:52.860 CC app/spdk_tgt/spdk_tgt.o 00:09:52.860 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:09:52.860 CC examples/blob/cli/blobcli.o 00:09:53.118 CC app/spdk_lspci/spdk_lspci.o 00:09:53.118 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:09:53.118 CC test/event/reactor/reactor.o 00:09:53.118 LINK histogram_perf 00:09:53.118 CXX test/cpp_headers/blob_bdev.o 00:09:53.118 LINK env_dpdk_post_init 00:09:53.118 LINK spdk_tgt 00:09:53.118 LINK spdk_lspci 00:09:53.118 LINK reactor 00:09:53.377 CC test/app/jsoncat/jsoncat.o 00:09:53.377 CXX test/cpp_headers/blobfs_bdev.o 00:09:53.377 CC test/env/memory/memory_ut.o 00:09:53.377 LINK bdevperf 00:09:53.377 CC app/spdk_nvme_perf/perf.o 00:09:53.377 LINK jsoncat 00:09:53.377 CC test/event/reactor_perf/reactor_perf.o 00:09:53.377 CXX test/cpp_headers/blobfs.o 00:09:53.638 CC test/nvme/aer/aer.o 00:09:53.638 LINK blobcli 00:09:53.638 LINK reactor_perf 00:09:53.638 CXX test/cpp_headers/blob.o 00:09:53.638 CC test/nvme/reset/reset.o 00:09:53.638 CC test/nvme/sgl/sgl.o 00:09:53.896 CXX test/cpp_headers/conf.o 00:09:53.896 LINK aer 00:09:53.896 CC test/event/app_repeat/app_repeat.o 00:09:53.896 CC examples/ioat/perf/perf.o 00:09:53.896 LINK reset 00:09:53.896 CXX test/cpp_headers/config.o 00:09:53.896 CXX test/cpp_headers/cpuset.o 00:09:53.896 LINK sgl 00:09:53.896 LINK app_repeat 00:09:54.154 CC examples/ioat/verify/verify.o 00:09:54.154 LINK ioat_perf 00:09:54.154 CXX test/cpp_headers/crc16.o 00:09:54.154 CC test/app/stub/stub.o 00:09:54.154 LINK memory_ut 00:09:54.154 CC test/nvme/e2edp/nvme_dp.o 00:09:54.413 CXX test/cpp_headers/crc32.o 00:09:54.413 LINK verify 00:09:54.413 CC test/event/scheduler/scheduler.o 00:09:54.413 CC test/nvme/overhead/overhead.o 00:09:54.413 LINK spdk_nvme_perf 00:09:54.413 LINK stub 00:09:54.413 CXX test/cpp_headers/crc64.o 00:09:54.413 CC test/env/pci/pci_ut.o 00:09:54.672 LINK nvme_dp 00:09:54.672 LINK scheduler 00:09:54.672 CXX test/cpp_headers/dif.o 00:09:54.672 CC test/nvme/err_injection/err_injection.o 00:09:54.672 CC app/spdk_nvme_identify/identify.o 00:09:54.672 LINK overhead 00:09:54.672 CC examples/nvme/hello_world/hello_world.o 00:09:54.672 CXX test/cpp_headers/dma.o 00:09:54.931 CC app/spdk_nvme_discover/discovery_aer.o 00:09:54.931 LINK err_injection 00:09:54.931 CXX test/cpp_headers/endian.o 00:09:54.931 CC test/nvme/startup/startup.o 00:09:54.931 LINK iscsi_fuzz 00:09:54.931 LINK hello_world 00:09:54.931 LINK pci_ut 00:09:54.931 CC test/nvme/reserve/reserve.o 00:09:54.931 CXX test/cpp_headers/env_dpdk.o 00:09:54.931 LINK spdk_nvme_discover 00:09:54.931 LINK startup 00:09:55.189 CC test/nvme/simple_copy/simple_copy.o 00:09:55.189 CXX test/cpp_headers/env.o 00:09:55.189 CC examples/nvme/reconnect/reconnect.o 00:09:55.189 LINK reserve 00:09:55.189 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 
00:09:55.189 CC app/spdk_top/spdk_top.o 00:09:55.448 CC app/vhost/vhost.o 00:09:55.448 CXX test/cpp_headers/event.o 00:09:55.448 LINK simple_copy 00:09:55.448 CC app/spdk_dd/spdk_dd.o 00:09:55.448 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:09:55.448 CC examples/nvme/nvme_manage/nvme_manage.o 00:09:55.448 CXX test/cpp_headers/fd_group.o 00:09:55.448 LINK vhost 00:09:55.449 LINK spdk_nvme_identify 00:09:55.449 LINK reconnect 00:09:55.708 CC test/nvme/connect_stress/connect_stress.o 00:09:55.708 CXX test/cpp_headers/fd.o 00:09:55.708 LINK spdk_dd 00:09:55.708 CC test/nvme/boot_partition/boot_partition.o 00:09:55.708 LINK connect_stress 00:09:55.708 CC test/nvme/compliance/nvme_compliance.o 00:09:55.708 CC test/nvme/fused_ordering/fused_ordering.o 00:09:55.708 LINK vhost_fuzz 00:09:55.708 CXX test/cpp_headers/file.o 00:09:55.968 LINK boot_partition 00:09:55.968 CXX test/cpp_headers/ftl.o 00:09:55.968 CXX test/cpp_headers/gpt_spec.o 00:09:55.968 CXX test/cpp_headers/hexlify.o 00:09:55.968 LINK fused_ordering 00:09:55.968 LINK nvme_manage 00:09:55.968 CC test/rpc_client/rpc_client_test.o 00:09:56.227 CXX test/cpp_headers/histogram_data.o 00:09:56.227 LINK nvme_compliance 00:09:56.227 CC examples/nvme/arbitration/arbitration.o 00:09:56.227 CC examples/nvme/hotplug/hotplug.o 00:09:56.227 LINK rpc_client_test 00:09:56.227 CC test/thread/poller_perf/poller_perf.o 00:09:56.227 LINK spdk_top 00:09:56.227 CXX test/cpp_headers/idxd.o 00:09:56.486 CC examples/sock/hello_world/hello_sock.o 00:09:56.486 CC examples/vmd/lsvmd/lsvmd.o 00:09:56.486 LINK poller_perf 00:09:56.486 CXX test/cpp_headers/idxd_spec.o 00:09:56.486 CC test/nvme/doorbell_aers/doorbell_aers.o 00:09:56.486 LINK hotplug 00:09:56.486 CC test/nvme/fdp/fdp.o 00:09:56.486 LINK arbitration 00:09:56.486 LINK lsvmd 00:09:56.745 LINK hello_sock 00:09:56.745 CXX test/cpp_headers/init.o 00:09:56.745 CC app/fio/nvme/fio_plugin.o 00:09:56.745 CC app/fio/bdev/fio_plugin.o 00:09:56.745 LINK doorbell_aers 00:09:56.745 CXX test/cpp_headers/ioat.o 00:09:56.745 CC examples/nvme/cmb_copy/cmb_copy.o 00:09:56.745 CC examples/vmd/led/led.o 00:09:57.005 LINK fdp 00:09:57.005 CC examples/nvme/abort/abort.o 00:09:57.005 CC examples/nvmf/nvmf/nvmf.o 00:09:57.005 CXX test/cpp_headers/ioat_spec.o 00:09:57.005 LINK led 00:09:57.005 LINK cmb_copy 00:09:57.005 CC examples/util/zipf/zipf.o 00:09:57.264 CC test/nvme/cuse/cuse.o 00:09:57.264 CXX test/cpp_headers/iscsi_spec.o 00:09:57.264 LINK nvmf 00:09:57.264 LINK spdk_bdev 00:09:57.264 LINK zipf 00:09:57.264 LINK abort 00:09:57.264 LINK spdk_nvme 00:09:57.264 CXX test/cpp_headers/json.o 00:09:57.264 CC examples/idxd/perf/perf.o 00:09:57.523 CC examples/thread/thread/thread_ex.o 00:09:57.523 CXX test/cpp_headers/jsonrpc.o 00:09:57.523 CXX test/cpp_headers/keyring.o 00:09:57.523 CXX test/cpp_headers/keyring_module.o 00:09:57.523 CC examples/interrupt_tgt/interrupt_tgt.o 00:09:57.523 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:09:57.523 CXX test/cpp_headers/likely.o 00:09:57.523 CXX test/cpp_headers/log.o 00:09:57.783 CXX test/cpp_headers/lvol.o 00:09:57.783 LINK thread 00:09:57.783 LINK interrupt_tgt 00:09:57.783 LINK pmr_persistence 00:09:57.783 CXX test/cpp_headers/memory.o 00:09:57.783 LINK idxd_perf 00:09:57.783 CXX test/cpp_headers/mmio.o 00:09:57.783 CXX test/cpp_headers/nbd.o 00:09:57.783 CXX test/cpp_headers/notify.o 00:09:57.783 CXX test/cpp_headers/nvme.o 00:09:57.783 CXX test/cpp_headers/nvme_intel.o 00:09:57.783 CXX test/cpp_headers/nvme_ocssd.o 00:09:57.783 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:09:57.783 CXX test/cpp_headers/nvme_spec.o 00:09:58.042 CXX test/cpp_headers/nvme_zns.o 00:09:58.042 CXX test/cpp_headers/nvmf_cmd.o 00:09:58.042 CXX test/cpp_headers/nvmf_fc_spec.o 00:09:58.042 CXX test/cpp_headers/nvmf.o 00:09:58.042 CXX test/cpp_headers/nvmf_spec.o 00:09:58.042 CXX test/cpp_headers/nvmf_transport.o 00:09:58.042 CXX test/cpp_headers/opal.o 00:09:58.042 LINK esnap 00:09:58.042 CXX test/cpp_headers/opal_spec.o 00:09:58.299 CXX test/cpp_headers/pci_ids.o 00:09:58.299 CXX test/cpp_headers/pipe.o 00:09:58.299 CXX test/cpp_headers/queue.o 00:09:58.299 CXX test/cpp_headers/reduce.o 00:09:58.299 CXX test/cpp_headers/rpc.o 00:09:58.299 CXX test/cpp_headers/scheduler.o 00:09:58.299 CXX test/cpp_headers/scsi.o 00:09:58.299 CXX test/cpp_headers/scsi_spec.o 00:09:58.299 LINK cuse 00:09:58.299 CXX test/cpp_headers/sock.o 00:09:58.299 CXX test/cpp_headers/stdinc.o 00:09:58.558 CXX test/cpp_headers/string.o 00:09:58.558 CXX test/cpp_headers/thread.o 00:09:58.558 CXX test/cpp_headers/trace.o 00:09:58.558 CXX test/cpp_headers/trace_parser.o 00:09:58.558 CXX test/cpp_headers/tree.o 00:09:58.558 CXX test/cpp_headers/ublk.o 00:09:58.558 CXX test/cpp_headers/util.o 00:09:58.558 CXX test/cpp_headers/uuid.o 00:09:58.558 CXX test/cpp_headers/version.o 00:09:58.558 CXX test/cpp_headers/vfio_user_pci.o 00:09:58.558 CXX test/cpp_headers/vfio_user_spec.o 00:09:58.558 CXX test/cpp_headers/vhost.o 00:09:58.817 CXX test/cpp_headers/vmd.o 00:09:58.817 CXX test/cpp_headers/xor.o 00:09:58.817 CXX test/cpp_headers/zipf.o 00:10:04.092 00:10:04.092 real 1m8.890s 00:10:04.092 user 6m9.423s 00:10:04.092 sys 1m46.521s 00:10:04.092 13:52:42 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:10:04.092 13:52:42 -- common/autotest_common.sh@10 -- $ set +x 00:10:04.092 ************************************ 00:10:04.092 END TEST make 00:10:04.092 ************************************ 00:10:04.092 13:52:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:04.092 13:52:43 -- pm/common@30 -- $ signal_monitor_resources TERM 00:10:04.092 13:52:43 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:10:04.092 13:52:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.092 13:52:43 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:04.092 13:52:43 -- pm/common@45 -- $ pid=5144 00:10:04.092 13:52:43 -- pm/common@52 -- $ sudo kill -TERM 5144 00:10:04.092 13:52:43 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.092 13:52:43 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:04.092 13:52:43 -- pm/common@45 -- $ pid=5145 00:10:04.092 13:52:43 -- pm/common@52 -- $ sudo kill -TERM 5145 00:10:04.092 13:52:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.092 13:52:43 -- nvmf/common.sh@7 -- # uname -s 00:10:04.092 13:52:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.092 13:52:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.092 13:52:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.092 13:52:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.092 13:52:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.092 13:52:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.092 13:52:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.092 13:52:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.092 13:52:43 -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.092 13:52:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.092 13:52:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:10:04.092 13:52:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:10:04.092 13:52:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.092 13:52:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.092 13:52:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:04.092 13:52:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.092 13:52:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.092 13:52:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.092 13:52:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.092 13:52:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.092 13:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.092 13:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.092 13:52:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.092 13:52:43 -- paths/export.sh@5 -- # export PATH 00:10:04.092 13:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.092 13:52:43 -- nvmf/common.sh@47 -- # : 0 00:10:04.092 13:52:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.092 13:52:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.092 13:52:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.092 13:52:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.092 13:52:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.092 13:52:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.092 13:52:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.092 13:52:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.092 13:52:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:04.092 13:52:43 -- spdk/autotest.sh@32 -- # uname -s 00:10:04.092 13:52:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:04.092 13:52:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:04.092 13:52:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:04.092 13:52:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:04.092 13:52:43 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:04.092 13:52:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:04.092 13:52:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:04.092 13:52:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:04.092 13:52:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:04.092 13:52:43 -- spdk/autotest.sh@48 -- # udevadm_pid=53993 00:10:04.092 13:52:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:04.092 13:52:43 -- pm/common@17 -- # local monitor 00:10:04.092 13:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.092 13:52:43 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53995 00:10:04.092 13:52:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:04.092 13:52:43 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53998 00:10:04.093 13:52:43 -- pm/common@26 -- # sleep 1 00:10:04.093 13:52:43 -- pm/common@21 -- # date +%s 00:10:04.093 13:52:43 -- pm/common@21 -- # date +%s 00:10:04.093 13:52:43 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714139563 00:10:04.093 13:52:43 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714139563 00:10:04.093 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714139563_collect-vmstat.pm.log 00:10:04.093 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714139563_collect-cpu-load.pm.log 00:10:05.029 13:52:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:05.029 13:52:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:05.029 13:52:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:05.029 13:52:44 -- common/autotest_common.sh@10 -- # set +x 00:10:05.029 13:52:44 -- spdk/autotest.sh@59 -- # create_test_list 00:10:05.029 13:52:44 -- common/autotest_common.sh@734 -- # xtrace_disable 00:10:05.029 13:52:44 -- common/autotest_common.sh@10 -- # set +x 00:10:05.029 13:52:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:05.029 13:52:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:05.029 13:52:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:05.029 13:52:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:05.029 13:52:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:05.029 13:52:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:05.029 13:52:44 -- common/autotest_common.sh@1441 -- # uname 00:10:05.029 13:52:44 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:10:05.029 13:52:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:05.029 13:52:44 -- common/autotest_common.sh@1461 -- # uname 00:10:05.029 13:52:44 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:10:05.029 13:52:44 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:05.029 13:52:44 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:05.029 13:52:44 -- spdk/autotest.sh@72 -- # hash lcov 00:10:05.029 13:52:44 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:05.029 13:52:44 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:05.029 --rc lcov_branch_coverage=1 00:10:05.029 --rc 
lcov_function_coverage=1 00:10:05.029 --rc genhtml_branch_coverage=1 00:10:05.029 --rc genhtml_function_coverage=1 00:10:05.029 --rc genhtml_legend=1 00:10:05.029 --rc geninfo_all_blocks=1 00:10:05.029 ' 00:10:05.029 13:52:44 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:05.029 --rc lcov_branch_coverage=1 00:10:05.029 --rc lcov_function_coverage=1 00:10:05.029 --rc genhtml_branch_coverage=1 00:10:05.029 --rc genhtml_function_coverage=1 00:10:05.029 --rc genhtml_legend=1 00:10:05.029 --rc geninfo_all_blocks=1 00:10:05.029 ' 00:10:05.029 13:52:44 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:05.029 --rc lcov_branch_coverage=1 00:10:05.029 --rc lcov_function_coverage=1 00:10:05.029 --rc genhtml_branch_coverage=1 00:10:05.029 --rc genhtml_function_coverage=1 00:10:05.029 --rc genhtml_legend=1 00:10:05.029 --rc geninfo_all_blocks=1 00:10:05.029 --no-external' 00:10:05.029 13:52:44 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:05.029 --rc lcov_branch_coverage=1 00:10:05.029 --rc lcov_function_coverage=1 00:10:05.029 --rc genhtml_branch_coverage=1 00:10:05.029 --rc genhtml_function_coverage=1 00:10:05.029 --rc genhtml_legend=1 00:10:05.029 --rc geninfo_all_blocks=1 00:10:05.029 --no-external' 00:10:05.029 13:52:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:05.029 lcov: LCOV version 1.14 00:10:05.029 13:52:44 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:13.218 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:10:13.218 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:10:13.218 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:10:13.218 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:10:13.218 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:10:13.218 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:10:19.800 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:10:19.800 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:10:32.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:10:32.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:10:32.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:10:32.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:10:32.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:10:32.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:10:32.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:10:32.042 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:10:32.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:10:32.042 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:10:32.042 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:10:32.043 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:10:32.043 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:10:32.043 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:10:32.043 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 
00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 
00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:10:32.044 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:10:32.044 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:10:34.607 13:53:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:10:34.607 13:53:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:34.607 13:53:14 -- common/autotest_common.sh@10 -- # set +x 00:10:34.607 13:53:14 -- spdk/autotest.sh@91 -- # rm -f 00:10:34.607 13:53:14 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:35.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:35.545 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:10:35.545 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:10:35.545 13:53:15 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:10:35.545 13:53:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:35.545 13:53:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:35.545 13:53:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:35.545 13:53:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:35.545 13:53:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:35.545 13:53:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:35.545 13:53:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:35.545 13:53:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:35.545 13:53:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:35.545 13:53:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:35.545 13:53:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:10:35.545 13:53:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:10:35.545 13:53:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:35.545 13:53:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:10:35.545 13:53:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:10:35.545 13:53:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:35.545 13:53:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:35.545 13:53:15 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:10:35.545 13:53:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:35.545 13:53:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:35.545 13:53:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:10:35.545 13:53:15 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:10:35.545 13:53:15 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:10:35.545 No valid GPT data, bailing 00:10:35.545 13:53:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # pt= 00:10:35.805 13:53:15 -- scripts/common.sh@392 -- # return 1 00:10:35.805 13:53:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:10:35.805 1+0 records in 00:10:35.805 1+0 records out 00:10:35.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511386 s, 205 MB/s 00:10:35.805 13:53:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:35.805 13:53:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:35.805 13:53:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:10:35.805 13:53:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:10:35.805 13:53:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:10:35.805 No valid GPT data, bailing 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # pt= 00:10:35.805 13:53:15 -- scripts/common.sh@392 -- # return 1 00:10:35.805 13:53:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:10:35.805 1+0 records in 00:10:35.805 1+0 records out 00:10:35.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00403243 s, 260 MB/s 00:10:35.805 13:53:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:35.805 13:53:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:35.805 13:53:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:10:35.805 13:53:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:10:35.805 13:53:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:10:35.805 No valid GPT data, bailing 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # pt= 00:10:35.805 13:53:15 -- scripts/common.sh@392 -- # return 1 00:10:35.805 13:53:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:10:35.805 1+0 records in 00:10:35.805 1+0 records out 00:10:35.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597957 s, 175 MB/s 00:10:35.805 13:53:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:10:35.805 13:53:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:10:35.805 13:53:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:10:35.805 13:53:15 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:10:35.805 13:53:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:10:35.805 No valid GPT data, bailing 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:10:35.805 13:53:15 -- scripts/common.sh@391 -- # pt= 00:10:35.805 13:53:15 -- scripts/common.sh@392 -- # return 1 00:10:35.805 13:53:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:10:35.805 1+0 records in 00:10:35.805 1+0 records out 00:10:35.805 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413601 s, 254 MB/s 00:10:35.805 13:53:15 -- spdk/autotest.sh@118 -- # sync 00:10:36.064 13:53:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:10:36.064 13:53:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:10:36.064 13:53:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:10:38.601 
13:53:17 -- spdk/autotest.sh@124 -- # uname -s 00:10:38.601 13:53:17 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:10:38.601 13:53:17 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:38.601 13:53:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:38.601 13:53:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.601 13:53:17 -- common/autotest_common.sh@10 -- # set +x 00:10:38.601 ************************************ 00:10:38.601 START TEST setup.sh 00:10:38.601 ************************************ 00:10:38.601 13:53:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:10:38.601 * Looking for test storage... 00:10:38.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:38.601 13:53:18 -- setup/test-setup.sh@10 -- # uname -s 00:10:38.601 13:53:18 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:10:38.601 13:53:18 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:38.601 13:53:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:38.601 13:53:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:38.601 13:53:18 -- common/autotest_common.sh@10 -- # set +x 00:10:38.601 ************************************ 00:10:38.601 START TEST acl 00:10:38.601 ************************************ 00:10:38.601 13:53:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:10:38.891 * Looking for test storage... 00:10:38.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:38.891 13:53:18 -- setup/acl.sh@10 -- # get_zoned_devs 00:10:38.891 13:53:18 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:38.891 13:53:18 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:38.891 13:53:18 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:38.891 13:53:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.891 13:53:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:38.891 13:53:18 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:38.891 13:53:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.891 13:53:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:38.891 13:53:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:38.891 13:53:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.891 13:53:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:10:38.891 13:53:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:10:38.891 13:53:18 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:38.891 13:53:18 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:10:38.891 13:53:18 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:10:38.891 13:53:18 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:10:38.891 13:53:18 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:38.891 13:53:18 -- setup/acl.sh@12 -- # devs=() 00:10:38.891 13:53:18 -- setup/acl.sh@12 -- # declare -a devs 00:10:38.891 13:53:18 -- setup/acl.sh@13 -- # drivers=() 00:10:38.891 13:53:18 -- setup/acl.sh@13 -- # declare -A drivers 00:10:38.891 13:53:18 -- setup/acl.sh@51 -- # setup reset 00:10:38.891 13:53:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:38.891 13:53:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:39.865 13:53:19 -- setup/acl.sh@52 -- # collect_setup_devs 00:10:39.865 13:53:19 -- setup/acl.sh@16 -- # local dev driver 00:10:39.865 13:53:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:39.865 13:53:19 -- setup/acl.sh@15 -- # setup output status 00:10:39.865 13:53:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:39.865 13:53:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # continue 00:10:40.431 13:53:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.431 Hugepages 00:10:40.431 node hugesize free / total 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # continue 00:10:40.431 13:53:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.431 00:10:40.431 Type BDF Vendor Device NUMA Driver Device Block devices 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # continue 00:10:40.431 13:53:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.431 13:53:19 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:10:40.431 13:53:19 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:10:40.431 13:53:19 -- setup/acl.sh@20 -- # continue 00:10:40.431 13:53:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.431 13:53:20 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:10:40.431 13:53:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:40.431 13:53:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:40.431 13:53:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:40.431 13:53:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:40.431 13:53:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.690 13:53:20 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:10:40.690 13:53:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:10:40.690 13:53:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:40.690 13:53:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:10:40.690 13:53:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:10:40.690 13:53:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:10:40.690 13:53:20 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:10:40.690 13:53:20 -- setup/acl.sh@54 -- # run_test denied denied 00:10:40.690 13:53:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:40.690 13:53:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.690 13:53:20 -- common/autotest_common.sh@10 -- # set +x 00:10:40.690 ************************************ 00:10:40.690 START TEST denied 00:10:40.690 ************************************ 00:10:40.690 13:53:20 -- common/autotest_common.sh@1111 -- # denied 00:10:40.690 13:53:20 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 
0000:00:10.0' 00:10:40.690 13:53:20 -- setup/acl.sh@38 -- # setup output config 00:10:40.690 13:53:20 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:10:40.690 13:53:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:40.690 13:53:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:41.627 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:10:41.627 13:53:21 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:10:41.627 13:53:21 -- setup/acl.sh@28 -- # local dev driver 00:10:41.627 13:53:21 -- setup/acl.sh@30 -- # for dev in "$@" 00:10:41.627 13:53:21 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:10:41.627 13:53:21 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:10:41.627 13:53:21 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:41.627 13:53:21 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:41.627 13:53:21 -- setup/acl.sh@41 -- # setup reset 00:10:41.627 13:53:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:41.627 13:53:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:42.563 00:10:42.563 real 0m1.771s 00:10:42.563 user 0m0.619s 00:10:42.563 sys 0m1.124s 00:10:42.563 13:53:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:42.563 13:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:42.563 ************************************ 00:10:42.563 END TEST denied 00:10:42.563 ************************************ 00:10:42.563 13:53:22 -- setup/acl.sh@55 -- # run_test allowed allowed 00:10:42.563 13:53:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:42.563 13:53:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.563 13:53:22 -- common/autotest_common.sh@10 -- # set +x 00:10:42.563 ************************************ 00:10:42.563 START TEST allowed 00:10:42.563 ************************************ 00:10:42.563 13:53:22 -- common/autotest_common.sh@1111 -- # allowed 00:10:42.563 13:53:22 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:10:42.563 13:53:22 -- setup/acl.sh@45 -- # setup output config 00:10:42.563 13:53:22 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:10:42.563 13:53:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:42.563 13:53:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:43.939 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:43.939 13:53:23 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:10:43.939 13:53:23 -- setup/acl.sh@28 -- # local dev driver 00:10:43.939 13:53:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:10:43.939 13:53:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:10:43.939 13:53:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:10:43.939 13:53:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:10:43.939 13:53:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:10:43.939 13:53:23 -- setup/acl.sh@48 -- # setup reset 00:10:43.939 13:53:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:43.939 13:53:23 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:44.505 00:10:44.505 real 0m1.913s 00:10:44.505 user 0m0.769s 00:10:44.505 sys 0m1.142s 00:10:44.505 13:53:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:44.505 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 ************************************ 00:10:44.505 
END TEST allowed 00:10:44.505 ************************************ 00:10:44.505 00:10:44.505 real 0m5.903s 00:10:44.505 user 0m2.287s 00:10:44.505 sys 0m3.572s 00:10:44.505 13:53:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:44.505 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:44.505 ************************************ 00:10:44.505 END TEST acl 00:10:44.505 ************************************ 00:10:44.764 13:53:24 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:44.764 13:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:44.764 13:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.764 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:44.764 ************************************ 00:10:44.764 START TEST hugepages 00:10:44.764 ************************************ 00:10:44.764 13:53:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:10:44.764 * Looking for test storage... 00:10:44.764 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:44.764 13:53:24 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:10:44.764 13:53:24 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:10:44.764 13:53:24 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:10:44.764 13:53:24 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:10:44.764 13:53:24 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:10:44.764 13:53:24 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:10:44.764 13:53:24 -- setup/common.sh@17 -- # local get=Hugepagesize 00:10:44.764 13:53:24 -- setup/common.sh@18 -- # local node= 00:10:44.764 13:53:24 -- setup/common.sh@19 -- # local var val 00:10:44.764 13:53:24 -- setup/common.sh@20 -- # local mem_f mem 00:10:44.764 13:53:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:44.764 13:53:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:44.764 13:53:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:44.764 13:53:24 -- setup/common.sh@28 -- # mapfile -t mem 00:10:44.764 13:53:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:44.764 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5270532 kB' 'MemAvailable: 7387388 kB' 'Buffers: 2436 kB' 'Cached: 2326656 kB' 'SwapCached: 0 kB' 'Active: 882184 kB' 'Inactive: 1560176 kB' 'Active(anon): 123756 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560176 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 716 kB' 'Writeback: 0 kB' 'AnonPages: 114928 kB' 'Mapped: 48800 kB' 'Shmem: 10488 kB' 'KReclaimable: 70828 kB' 'Slab: 153556 kB' 'SReclaimable: 70828 kB' 'SUnreclaim: 82728 kB' 'KernelStack: 6364 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 346932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 
2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- 
setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.765 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.765 13:53:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 
13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # continue 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # IFS=': ' 00:10:44.766 13:53:24 -- setup/common.sh@31 -- # read -r var val _ 00:10:44.766 13:53:24 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:10:44.766 13:53:24 -- setup/common.sh@33 -- # echo 2048 00:10:44.766 13:53:24 -- setup/common.sh@33 -- # return 0 00:10:44.766 13:53:24 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:10:44.766 13:53:24 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:10:44.766 13:53:24 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:10:44.766 13:53:24 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:10:44.766 13:53:24 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:10:44.766 13:53:24 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:10:44.766 13:53:24 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:10:44.766 13:53:24 -- setup/hugepages.sh@207 -- # get_nodes 00:10:44.766 13:53:24 -- setup/hugepages.sh@27 -- # local node 00:10:44.766 13:53:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:44.766 13:53:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:10:44.766 13:53:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:44.766 13:53:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:44.766 13:53:24 -- setup/hugepages.sh@208 -- # clear_hp 00:10:44.766 13:53:24 -- setup/hugepages.sh@37 -- # local node hp 00:10:44.766 13:53:24 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:44.766 13:53:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:44.766 13:53:24 -- setup/hugepages.sh@41 -- # echo 0 00:10:44.766 13:53:24 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:44.766 13:53:24 -- setup/hugepages.sh@41 -- # echo 0 00:10:45.025 13:53:24 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:45.025 13:53:24 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:45.025 13:53:24 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:10:45.025 13:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:45.025 13:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:45.025 13:53:24 -- common/autotest_common.sh@10 -- # set +x 00:10:45.025 ************************************ 00:10:45.025 START TEST default_setup 00:10:45.025 ************************************ 00:10:45.025 13:53:24 -- common/autotest_common.sh@1111 -- # default_setup 00:10:45.025 13:53:24 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:10:45.025 13:53:24 -- setup/hugepages.sh@49 -- # local size=2097152 00:10:45.025 13:53:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:45.025 13:53:24 -- setup/hugepages.sh@51 -- # shift 00:10:45.025 13:53:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:45.025 13:53:24 -- setup/hugepages.sh@52 -- # local node_ids 00:10:45.025 13:53:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:45.026 13:53:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:45.026 13:53:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:45.026 13:53:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:45.026 13:53:24 -- 
setup/hugepages.sh@62 -- # local user_nodes 00:10:45.026 13:53:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:45.026 13:53:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:45.026 13:53:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:45.026 13:53:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:45.026 13:53:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:45.026 13:53:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:45.026 13:53:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:45.026 13:53:24 -- setup/hugepages.sh@73 -- # return 0 00:10:45.026 13:53:24 -- setup/hugepages.sh@137 -- # setup output 00:10:45.026 13:53:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:45.026 13:53:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:45.966 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:45.966 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.966 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:45.966 13:53:25 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:10:45.966 13:53:25 -- setup/hugepages.sh@89 -- # local node 00:10:45.966 13:53:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:45.966 13:53:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:45.966 13:53:25 -- setup/hugepages.sh@92 -- # local surp 00:10:45.966 13:53:25 -- setup/hugepages.sh@93 -- # local resv 00:10:45.966 13:53:25 -- setup/hugepages.sh@94 -- # local anon 00:10:45.966 13:53:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:45.966 13:53:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:45.966 13:53:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:45.966 13:53:25 -- setup/common.sh@18 -- # local node= 00:10:45.966 13:53:25 -- setup/common.sh@19 -- # local var val 00:10:45.966 13:53:25 -- setup/common.sh@20 -- # local mem_f mem 00:10:45.966 13:53:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:45.966 13:53:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:45.966 13:53:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:45.966 13:53:25 -- setup/common.sh@28 -- # mapfile -t mem 00:10:45.966 13:53:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7367752 kB' 'MemAvailable: 9484424 kB' 'Buffers: 2436 kB' 'Cached: 2326648 kB' 'SwapCached: 0 kB' 'Active: 891772 kB' 'Inactive: 1560192 kB' 'Active(anon): 133344 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 896 kB' 'Writeback: 0 kB' 'AnonPages: 124464 kB' 'Mapped: 49012 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153312 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82884 kB' 'KernelStack: 6384 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 
13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.966 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.966 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:45.967 13:53:25 -- setup/common.sh@33 -- # echo 0 00:10:45.967 13:53:25 -- setup/common.sh@33 -- # return 0 00:10:45.967 13:53:25 -- setup/hugepages.sh@97 -- # anon=0 00:10:45.967 13:53:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:45.967 13:53:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:45.967 13:53:25 -- setup/common.sh@18 -- # local node= 00:10:45.967 13:53:25 -- setup/common.sh@19 -- # local var val 00:10:45.967 13:53:25 -- setup/common.sh@20 -- # local mem_f mem 00:10:45.967 13:53:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:45.967 13:53:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:45.967 13:53:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:45.967 13:53:25 -- setup/common.sh@28 -- # mapfile -t mem 00:10:45.967 13:53:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7367752 kB' 'MemAvailable: 9484424 kB' 'Buffers: 2436 kB' 'Cached: 2326648 kB' 'SwapCached: 0 kB' 'Active: 891540 kB' 'Inactive: 1560192 kB' 'Active(anon): 133112 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'AnonPages: 124208 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153308 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82880 kB' 'KernelStack: 6368 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
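Editorial note on the scan traced above: setup/common.sh walks /proc/meminfo line by line, splitting each entry on ': ' into a key and a value, hitting "continue" for every key that is not the one requested (here AnonHugePages), then echoing the matching value and returning. A minimal stand-alone sketch of that pattern, using a hypothetical helper name (get_meminfo_field) rather than the repo's own get_meminfo:

    # Sketch only: scan /proc/meminfo for one key and print its numeric value,
    # mirroring the IFS=': ' read loop seen in the xtrace above.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_field AnonHugePages  ->  0 in this run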
00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.967 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.967 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 
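For reference while the HugePages_Surp scan continues: earlier in this trace, get_test_nr_hugepages sized the pool from a 2097152 kB request against the detected 2048 kB default hugepage size and assigned the result to node 0. A worked version of that arithmetic, as an assumed reconstruction of the traced values rather than the script itself:

    # Assumed reconstruction of the sizing step traced earlier:
    # 2097152 kB requested / 2048 kB per hugepage -> 1024 pages on node 0.
    size_kb=2097152
    hugepagesize_kb=2048
    nr_hugepages=$(( size_kb / hugepagesize_kb ))
    echo "nr_hugepages=$nr_hugepages"    # prints nr_hugepages=1024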
00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # 
IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.968 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.968 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:45.969 13:53:25 -- setup/common.sh@33 -- # echo 0 00:10:45.969 13:53:25 -- setup/common.sh@33 -- # return 0 
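With AnonHugePages and HugePages_Surp both read back as 0, the remaining reads (HugePages_Rsvd here, HugePages_Total further down) feed the bookkeeping comparison traced below, (( 1024 == nr_hugepages + surp + resv )). A hedged sketch of that accounting, reusing the hypothetical get_meminfo_field helper from the earlier sketch rather than the script's exact expression:

    # Sketch of the verification step: the pool read back from /proc/meminfo
    # must add up to the 1024 pages requested, with no surplus or reserved
    # pages left over (all values match this run's trace).
    surp=$(get_meminfo_field HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo_field HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo_field HugePages_Total)   # 1024 in this run
    (( total == 1024 + surp + resv )) && echo "hugepage pool verified"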
00:10:45.969 13:53:25 -- setup/hugepages.sh@99 -- # surp=0 00:10:45.969 13:53:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:45.969 13:53:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:45.969 13:53:25 -- setup/common.sh@18 -- # local node= 00:10:45.969 13:53:25 -- setup/common.sh@19 -- # local var val 00:10:45.969 13:53:25 -- setup/common.sh@20 -- # local mem_f mem 00:10:45.969 13:53:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:45.969 13:53:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:45.969 13:53:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:45.969 13:53:25 -- setup/common.sh@28 -- # mapfile -t mem 00:10:45.969 13:53:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7367752 kB' 'MemAvailable: 9484424 kB' 'Buffers: 2436 kB' 'Cached: 2326648 kB' 'SwapCached: 0 kB' 'Active: 891740 kB' 'Inactive: 1560192 kB' 'Active(anon): 133312 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'AnonPages: 124436 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153304 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82876 kB' 'KernelStack: 6368 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 
13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- 
setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.969 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.969 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:45.970 13:53:25 -- setup/common.sh@33 -- # echo 0 00:10:45.970 13:53:25 -- setup/common.sh@33 -- # return 0 00:10:45.970 13:53:25 -- setup/hugepages.sh@100 -- # resv=0 00:10:45.970 13:53:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:45.970 nr_hugepages=1024 00:10:45.970 resv_hugepages=0 00:10:45.970 13:53:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:45.970 surplus_hugepages=0 00:10:45.970 13:53:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:45.970 anon_hugepages=0 00:10:45.970 13:53:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:45.970 13:53:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:45.970 13:53:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:45.970 13:53:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:45.970 13:53:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:45.970 13:53:25 -- setup/common.sh@18 -- # local node= 00:10:45.970 13:53:25 -- setup/common.sh@19 -- # local var val 00:10:45.970 13:53:25 -- setup/common.sh@20 -- # local mem_f mem 00:10:45.970 13:53:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:45.970 13:53:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:45.970 13:53:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:45.970 13:53:25 -- 
setup/common.sh@28 -- # mapfile -t mem 00:10:45.970 13:53:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:45.970 13:53:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7367752 kB' 'MemAvailable: 9484424 kB' 'Buffers: 2436 kB' 'Cached: 2326648 kB' 'SwapCached: 0 kB' 'Active: 891540 kB' 'Inactive: 1560192 kB' 'Active(anon): 133112 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'AnonPages: 124240 kB' 'Mapped: 48824 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153304 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82876 kB' 'KernelStack: 6352 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.970 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.970 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # continue 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:45.971 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:45.971 13:53:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.231 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.231 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.231 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.232 13:53:25 -- setup/common.sh@33 -- # echo 1024 00:10:46.232 13:53:25 -- setup/common.sh@33 -- # return 0 00:10:46.232 13:53:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:46.232 13:53:25 -- setup/hugepages.sh@112 -- # get_nodes 00:10:46.232 13:53:25 -- setup/hugepages.sh@27 -- # local node 00:10:46.232 13:53:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:46.232 13:53:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:46.232 13:53:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:46.232 13:53:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:46.232 13:53:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:46.232 13:53:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:46.232 13:53:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:46.232 13:53:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:46.232 13:53:25 -- setup/common.sh@18 -- # local node=0 00:10:46.232 13:53:25 -- setup/common.sh@19 -- # local var val 00:10:46.232 13:53:25 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.232 13:53:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.232 13:53:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:46.232 13:53:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:46.232 13:53:25 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.232 13:53:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7367752 kB' 'MemUsed: 4874216 kB' 'SwapCached: 0 kB' 'Active: 891468 kB' 'Inactive: 1560192 kB' 'Active(anon): 133040 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560192 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 908 kB' 'Writeback: 0 kB' 'FilePages: 2329084 kB' 'Mapped: 48824 kB' 'AnonPages: 124168 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153304 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82876 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- 
setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.232 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.232 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # continue 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.233 13:53:25 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.233 13:53:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.233 13:53:25 -- setup/common.sh@33 -- # echo 0 00:10:46.233 13:53:25 -- setup/common.sh@33 -- # return 0 00:10:46.233 13:53:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:46.233 13:53:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:46.233 13:53:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
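The loop traced above is setup/common.sh's get_meminfo walking every meminfo key until it reaches the one it was asked for (HugePages_Total system-wide, then HugePages_Surp for node 0), after which setup/hugepages.sh checks at hugepages.sh@110 that the pool adds up. A rough standalone sketch of that same lookup-and-check pattern follows; the meminfo_value name and the NRHUGE default are illustrative, not taken from the SPDK scripts.

#!/usr/bin/env bash
# Minimal sketch of the pattern the xtrace above shows: read a meminfo
# file, split each line on ': ' like the traced read loop, and print the
# value for one key. Per-node files prefix lines with "Node <id> ", which
# the traced script strips with an extglob pattern, reproduced here.
shopt -s extglob

meminfo_value() {                       # meminfo_value <Key> [node-id]  (illustrative helper)
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    [[ -n $node ]] && file=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }     # drop the per-node "Node N " prefix, if any
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$key" ]]; then
            echo "$val"                 # e.g. 1024 for HugePages_Total in the trace above
            return 0
        fi
    done <"$file"
    return 1
}

# The accounting check mirrored from hugepages.sh@110 in the trace: the
# configured pool should equal requested pages plus surplus plus reserved.
nr_hugepages=${NRHUGE:-1024}            # 1024 in the default_setup run above; illustrative default
total=$(meminfo_value HugePages_Total)
surp=$(meminfo_value HugePages_Surp)
resv=$(meminfo_value HugePages_Rsvd)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool adds up: $total == $nr_hugepages + $surp + $resv"
else
    echo "unexpected hugepage count: $total" >&2
fi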
00:10:46.233 13:53:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:46.233 13:53:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:46.233 node0=1024 expecting 1024 00:10:46.233 13:53:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:46.233 00:10:46.233 real 0m1.132s 00:10:46.233 user 0m0.479s 00:10:46.233 sys 0m0.628s 00:10:46.233 13:53:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.233 13:53:25 -- common/autotest_common.sh@10 -- # set +x 00:10:46.233 ************************************ 00:10:46.233 END TEST default_setup 00:10:46.233 ************************************ 00:10:46.233 13:53:25 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:10:46.233 13:53:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:46.233 13:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.233 13:53:25 -- common/autotest_common.sh@10 -- # set +x 00:10:46.233 ************************************ 00:10:46.233 START TEST per_node_1G_alloc 00:10:46.233 ************************************ 00:10:46.233 13:53:25 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:10:46.233 13:53:25 -- setup/hugepages.sh@143 -- # local IFS=, 00:10:46.233 13:53:25 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:10:46.233 13:53:25 -- setup/hugepages.sh@49 -- # local size=1048576 00:10:46.233 13:53:25 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:46.233 13:53:25 -- setup/hugepages.sh@51 -- # shift 00:10:46.233 13:53:25 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:46.233 13:53:25 -- setup/hugepages.sh@52 -- # local node_ids 00:10:46.233 13:53:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:46.233 13:53:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:46.233 13:53:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:46.233 13:53:25 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:46.233 13:53:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:46.233 13:53:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:46.233 13:53:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:46.233 13:53:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:46.233 13:53:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:46.233 13:53:25 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:46.233 13:53:25 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:46.233 13:53:25 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:10:46.233 13:53:25 -- setup/hugepages.sh@73 -- # return 0 00:10:46.233 13:53:25 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:10:46.233 13:53:25 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:10:46.233 13:53:25 -- setup/hugepages.sh@146 -- # setup output 00:10:46.233 13:53:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:46.233 13:53:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:46.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:46.805 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:46.805 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:46.805 13:53:26 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:10:46.805 13:53:26 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:10:46.805 13:53:26 -- setup/hugepages.sh@89 -- # local node 00:10:46.805 13:53:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:46.805 
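per_node_1G_alloc, which starts in the trace above, asks get_test_nr_hugepages for 1048576 kB on node 0; with the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to the nr_hugepages=512 and nodes_test[0]=512 values seen here, handed to setup.sh as NRHUGE=512 HUGENODE=0. A back-of-the-envelope version of that sizing, with illustrative variable names:

# Sketch of the sizing arithmetic: requested kB divided by the hugepage
# size gives the per-node page count the test expects to verify afterwards.
# (Illustrative names; the hugepage size may differ on other systems.)
size_kb=1048576                          # 1 GiB, as requested in the trace
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
echo "nodes_test[0]=$(( size_kb / hugepagesize_kb ))"   # 512 with 2048 kB pages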
13:53:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:46.805 13:53:26 -- setup/hugepages.sh@92 -- # local surp 00:10:46.805 13:53:26 -- setup/hugepages.sh@93 -- # local resv 00:10:46.805 13:53:26 -- setup/hugepages.sh@94 -- # local anon 00:10:46.805 13:53:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:46.805 13:53:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:46.805 13:53:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:46.805 13:53:26 -- setup/common.sh@18 -- # local node= 00:10:46.805 13:53:26 -- setup/common.sh@19 -- # local var val 00:10:46.805 13:53:26 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.805 13:53:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.805 13:53:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.805 13:53:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.805 13:53:26 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.805 13:53:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8415660 kB' 'MemAvailable: 10532336 kB' 'Buffers: 2436 kB' 'Cached: 2326652 kB' 'SwapCached: 0 kB' 'Active: 891844 kB' 'Inactive: 1560196 kB' 'Active(anon): 133416 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1096 kB' 'Writeback: 0 kB' 'AnonPages: 124372 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153296 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82868 kB' 'KernelStack: 6352 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 
13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 
00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.805 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.805 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:46.806 13:53:26 -- setup/common.sh@33 -- # echo 0 00:10:46.806 13:53:26 -- setup/common.sh@33 -- # return 0 00:10:46.806 13:53:26 -- setup/hugepages.sh@97 -- # anon=0 00:10:46.806 13:53:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:46.806 13:53:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:46.806 13:53:26 -- setup/common.sh@18 -- # local node= 00:10:46.806 13:53:26 -- setup/common.sh@19 -- # local var val 00:10:46.806 13:53:26 
-- setup/common.sh@20 -- # local mem_f mem 00:10:46.806 13:53:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.806 13:53:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.806 13:53:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.806 13:53:26 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.806 13:53:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8415660 kB' 'MemAvailable: 10532336 kB' 'Buffers: 2436 kB' 'Cached: 2326652 kB' 'SwapCached: 0 kB' 'Active: 891712 kB' 'Inactive: 1560196 kB' 'Active(anon): 133284 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1096 kB' 'Writeback: 0 kB' 'AnonPages: 124440 kB' 'Mapped: 48848 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153296 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82868 kB' 'KernelStack: 6384 kB' 'PageTables: 4420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 
-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.806 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.806 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 
00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- 
setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val 
_ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.807 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.807 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:46.807 13:53:26 -- setup/common.sh@33 -- # echo 0 00:10:46.807 13:53:26 -- setup/common.sh@33 -- # return 0 00:10:46.807 13:53:26 -- setup/hugepages.sh@99 -- # surp=0 00:10:46.808 13:53:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:46.808 13:53:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:46.808 13:53:26 -- setup/common.sh@18 -- # local node= 00:10:46.808 13:53:26 -- setup/common.sh@19 -- # local var val 00:10:46.808 13:53:26 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.808 13:53:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.808 13:53:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.808 13:53:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.808 13:53:26 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.808 13:53:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8415912 kB' 'MemAvailable: 10532588 kB' 'Buffers: 2436 kB' 'Cached: 2326652 kB' 'SwapCached: 0 kB' 'Active: 891560 kB' 'Inactive: 1560196 kB' 'Active(anon): 133132 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 1096 kB' 'Writeback: 0 kB' 'AnonPages: 124336 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153296 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82868 kB' 'KernelStack: 6368 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.808 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.808 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var 
val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 
00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:46.809 13:53:26 -- setup/common.sh@33 -- # echo 0 00:10:46.809 13:53:26 -- setup/common.sh@33 -- # return 0 00:10:46.809 13:53:26 -- setup/hugepages.sh@100 -- # resv=0 00:10:46.809 13:53:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:10:46.809 nr_hugepages=512 00:10:46.809 resv_hugepages=0 00:10:46.809 13:53:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:46.809 surplus_hugepages=0 00:10:46.809 13:53:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:46.809 anon_hugepages=0 00:10:46.809 13:53:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:46.809 13:53:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:46.809 13:53:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:10:46.809 13:53:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:46.809 13:53:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:46.809 13:53:26 -- setup/common.sh@18 -- # local node= 00:10:46.809 13:53:26 -- setup/common.sh@19 -- # local var val 00:10:46.809 13:53:26 -- setup/common.sh@20 -- # local mem_f mem 00:10:46.809 13:53:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:46.809 13:53:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:46.809 13:53:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:46.809 13:53:26 -- setup/common.sh@28 -- # mapfile -t mem 00:10:46.809 13:53:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:46.809 13:53:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8415912 kB' 'MemAvailable: 10532588 kB' 'Buffers: 2436 kB' 'Cached: 2326652 kB' 'SwapCached: 0 kB' 'Active: 891572 kB' 'Inactive: 1560196 kB' 'Active(anon): 133144 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1096 kB' 'Writeback: 0 kB' 'AnonPages: 124340 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153296 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82868 kB' 'KernelStack: 6368 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 
'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.809 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.809 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 
13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:46.810 13:53:26 -- setup/common.sh@32 -- # continue 00:10:46.810 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.070 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.070 13:53:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.071 13:53:26 -- setup/common.sh@33 -- # echo 512 00:10:47.071 13:53:26 -- setup/common.sh@33 -- # return 0 00:10:47.071 13:53:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:47.071 13:53:26 -- setup/hugepages.sh@112 -- # get_nodes 00:10:47.071 13:53:26 -- setup/hugepages.sh@27 -- # local node 
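The xtrace block above is setup/common.sh's get_meminfo helper at work: it mapfile-reads /proc/meminfo (or a node's own meminfo file when a node number is given), strips the 'Node N' prefix, and walks field by field until the requested key, here HugePages_Total, matches, then echoes that value (512). A minimal standalone sketch of the same lookup, assuming only bash with extglob and a readable meminfo file; the name get_meminfo_value is illustrative and not the script's own function:

  #!/usr/bin/env bash
  shopt -s extglob
  # Hypothetical helper (not the actual setup/common.sh function): return one
  # meminfo field, optionally from a single NUMA node's meminfo file.
  get_meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }        # per-node lines carry a "Node N " prefix
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }

  get_meminfo_value HugePages_Total      # whole system, e.g. 512 here
  get_meminfo_value HugePages_Surp 0     # node 0 only, e.g. 0 here

The trace runs long only because xtrace prints every per-field comparison; the value the test actually consumes is the single echoed 512.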
00:10:47.071 13:53:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:47.071 13:53:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:47.071 13:53:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:47.071 13:53:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:47.071 13:53:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:47.071 13:53:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:47.071 13:53:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:47.071 13:53:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.071 13:53:26 -- setup/common.sh@18 -- # local node=0 00:10:47.071 13:53:26 -- setup/common.sh@19 -- # local var val 00:10:47.071 13:53:26 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.071 13:53:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.071 13:53:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:47.071 13:53:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:47.071 13:53:26 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.071 13:53:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8415660 kB' 'MemUsed: 3826308 kB' 'SwapCached: 0 kB' 'Active: 892032 kB' 'Inactive: 1560196 kB' 'Active(anon): 133604 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560196 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1096 kB' 'Writeback: 0 kB' 'FilePages: 2329088 kB' 'Mapped: 49880 kB' 'AnonPages: 124828 kB' 'Shmem: 10464 kB' 'KernelStack: 6400 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153296 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': 
' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- 
setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.071 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.071 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 
13:53:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # continue 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.072 13:53:26 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.072 13:53:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.072 13:53:26 -- setup/common.sh@33 -- # echo 0 00:10:47.072 13:53:26 -- setup/common.sh@33 -- # return 0 00:10:47.072 13:53:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:47.072 13:53:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:47.072 13:53:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:47.072 13:53:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:47.072 13:53:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:47.072 node0=512 expecting 512 00:10:47.072 13:53:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:47.072 00:10:47.072 real 0m0.716s 00:10:47.072 user 0m0.349s 00:10:47.072 sys 0m0.389s 00:10:47.072 13:53:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.072 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:10:47.072 ************************************ 00:10:47.072 END TEST per_node_1G_alloc 00:10:47.072 ************************************ 00:10:47.072 13:53:26 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:10:47.072 13:53:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.072 13:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.072 13:53:26 -- common/autotest_common.sh@10 -- # set +x 00:10:47.072 ************************************ 00:10:47.072 START TEST even_2G_alloc 00:10:47.072 ************************************ 00:10:47.072 13:53:26 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:10:47.072 13:53:26 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:10:47.072 13:53:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:10:47.072 13:53:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 
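At this point per_node_1G_alloc finishes ('node0=512 expecting 512', i.e. all 512 reserved 2048 kB pages landed on node 0) and even_2G_alloc begins: the requested size of 2097152 (kB, by the look of it: 2097152 / 2048 = 1024 pages, matching nr_hugepages=1024 in the trace) is turned into a page count, and scripts/setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes so the reservation is spread evenly over the NUMA nodes. A rough sketch of that kind of even split, assuming the standard per-node sysfs counters and a 2048 kB hugepage size; this illustrates the idea only and is not the actual setup.sh implementation:

  #!/usr/bin/env bash
  # Illustrative even-allocation loop (requires root): spread NRHUGE 2 MiB pages
  # evenly over all NUMA nodes via the per-node sysfs knobs.
  NRHUGE=${NRHUGE:-1024}
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( NRHUGE / ${#nodes[@]} ))      # integer split; any remainder is ignored here
  for node in "${nodes[@]}"; do
      echo "$per_node" > "$node/hugepages/hugepages-2048kB/nr_hugepages"
  done
  grep -E 'HugePages_(Total|Free)' /proc/meminfo

On this single-node VM (no_nodes=1 in the trace above) the split is trivial, 1024 pages on node0, which is what the verify_nr_hugepages readback that follows in the log checks against.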
00:10:47.072 13:53:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:47.072 13:53:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:47.072 13:53:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:47.072 13:53:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:47.072 13:53:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:47.072 13:53:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:47.072 13:53:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:47.072 13:53:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:47.072 13:53:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:47.072 13:53:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:47.072 13:53:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:47.072 13:53:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:47.072 13:53:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:10:47.072 13:53:26 -- setup/hugepages.sh@83 -- # : 0 00:10:47.072 13:53:26 -- setup/hugepages.sh@84 -- # : 0 00:10:47.072 13:53:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:47.072 13:53:26 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:10:47.072 13:53:26 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:10:47.072 13:53:26 -- setup/hugepages.sh@153 -- # setup output 00:10:47.072 13:53:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:47.072 13:53:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:47.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.643 13:53:27 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:10:47.643 13:53:27 -- setup/hugepages.sh@89 -- # local node 00:10:47.643 13:53:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:47.643 13:53:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:47.643 13:53:27 -- setup/hugepages.sh@92 -- # local surp 00:10:47.643 13:53:27 -- setup/hugepages.sh@93 -- # local resv 00:10:47.643 13:53:27 -- setup/hugepages.sh@94 -- # local anon 00:10:47.643 13:53:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:47.643 13:53:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:47.643 13:53:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:47.643 13:53:27 -- setup/common.sh@18 -- # local node= 00:10:47.643 13:53:27 -- setup/common.sh@19 -- # local var val 00:10:47.643 13:53:27 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.643 13:53:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.643 13:53:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.643 13:53:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.643 13:53:27 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.643 13:53:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.643 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.643 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.643 13:53:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7366580 kB' 'MemAvailable: 9483288 kB' 'Buffers: 2436 kB' 'Cached: 2326684 kB' 'SwapCached: 0 kB' 'Active: 891740 kB' 'Inactive: 1560228 kB' 'Active(anon): 133312 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1280 kB' 'Writeback: 0 kB' 'AnonPages: 124432 kB' 'Mapped: 48956 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153408 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82980 kB' 'KernelStack: 6324 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:47.643 13:53:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.643 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.643 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.643 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.643 13:53:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.643 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.643 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.643 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 
13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ 
Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- 
# read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.644 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.644 13:53:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:47.644 13:53:27 -- setup/common.sh@33 -- # echo 0 00:10:47.644 13:53:27 -- setup/common.sh@33 -- # return 0 00:10:47.644 13:53:27 -- setup/hugepages.sh@97 -- # anon=0 00:10:47.644 13:53:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:47.644 13:53:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.644 13:53:27 -- setup/common.sh@18 -- # local node= 00:10:47.644 13:53:27 -- setup/common.sh@19 -- # local var val 00:10:47.644 13:53:27 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.644 13:53:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.644 13:53:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.644 13:53:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.644 13:53:27 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.645 13:53:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.645 13:53:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7366796 kB' 'MemAvailable: 9483504 kB' 'Buffers: 2436 kB' 'Cached: 2326684 kB' 'SwapCached: 0 kB' 'Active: 891380 kB' 'Inactive: 1560228 kB' 'Active(anon): 132952 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1280 kB' 'Writeback: 0 kB' 'AnonPages: 124104 kB' 'Mapped: 49072 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153408 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82980 kB' 'KernelStack: 6352 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 
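The xtrace above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time until it reaches the requested key (AnonHugePages in the call that just returned anon=0, HugePages_Surp in the scan now underway). A minimal stand-alone sketch of that pattern, using a simplified helper of my own naming rather than the script's exact code:

meminfo_value() {                         # illustrative helper, not from setup/common.sh
    local key=$1 file=${2:-/proc/meminfo} # the real script can also point at a per-node meminfo
    local var val _
    while IFS=': ' read -r var val _; do  # same IFS/read pattern as in the trace above
        [[ $var == "$key" ]] && { echo "$val"; return 0; }
    done < "$file"
    return 1                              # key not present
}
# e.g. anon=$(meminfo_value AnonHugePages)   # 0 (kB) in this run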
00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.645 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.645 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 
-- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 
13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.646 13:53:27 -- setup/common.sh@33 -- # echo 0 00:10:47.646 13:53:27 -- setup/common.sh@33 -- # return 0 00:10:47.646 13:53:27 -- setup/hugepages.sh@99 -- # surp=0 00:10:47.646 13:53:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:47.646 13:53:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:47.646 13:53:27 -- setup/common.sh@18 -- # local node= 00:10:47.646 13:53:27 -- setup/common.sh@19 -- # local var val 00:10:47.646 13:53:27 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.646 13:53:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.646 13:53:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.646 13:53:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.646 13:53:27 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.646 13:53:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7366796 kB' 'MemAvailable: 9483508 kB' 'Buffers: 2436 kB' 'Cached: 2326688 kB' 'SwapCached: 0 kB' 'Active: 891524 kB' 'Inactive: 1560232 kB' 'Active(anon): 133096 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1280 kB' 'Writeback: 0 kB' 'AnonPages: 124020 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153408 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82980 kB' 'KernelStack: 6368 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
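With surp=0 recorded above, the trace repeats the same field-by-field scan for HugePages_Rsvd and then checks the hugepage pool totals (echoing nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 along the way). Reduced to its arithmetic, and reusing the illustrative meminfo_value helper sketched earlier (an assumption, not the script's own call):

nr_hugepages=1024                                   # what the even_2G_alloc test requested
surp=$(meminfo_value HugePages_Surp)                # 0 in this run
resv=$(meminfo_value HugePages_Rsvd)                # 0 in this run
total=$(meminfo_value HugePages_Total)              # 1024 in this run
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage total"
# The per-node pass later in the trace applies the same idea to
# /sys/devices/system/node/node0/meminfo, hence 'node0=1024 expecting 1024'.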
00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 
00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.646 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.646 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # 
IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.647 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:47.647 13:53:27 -- setup/common.sh@33 -- # echo 0 00:10:47.647 13:53:27 -- setup/common.sh@33 -- # return 0 00:10:47.647 13:53:27 -- setup/hugepages.sh@100 -- # resv=0 00:10:47.647 nr_hugepages=1024 00:10:47.647 13:53:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:47.647 resv_hugepages=0 00:10:47.647 13:53:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:47.647 13:53:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:47.647 surplus_hugepages=0 00:10:47.647 anon_hugepages=0 00:10:47.647 13:53:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:47.647 13:53:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + 
resv )) 00:10:47.647 13:53:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:47.647 13:53:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:47.647 13:53:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:47.647 13:53:27 -- setup/common.sh@18 -- # local node= 00:10:47.647 13:53:27 -- setup/common.sh@19 -- # local var val 00:10:47.647 13:53:27 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.647 13:53:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.647 13:53:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:47.647 13:53:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:47.647 13:53:27 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.647 13:53:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.647 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7366796 kB' 'MemAvailable: 9483508 kB' 'Buffers: 2436 kB' 'Cached: 2326688 kB' 'SwapCached: 0 kB' 'Active: 891524 kB' 'Inactive: 1560232 kB' 'Active(anon): 133096 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1280 kB' 'Writeback: 0 kB' 'AnonPages: 124280 kB' 'Mapped: 48860 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153408 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82980 kB' 'KernelStack: 6368 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 356616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 
13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.648 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.648 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:47.649 13:53:27 -- setup/common.sh@33 -- # echo 1024 00:10:47.649 13:53:27 -- setup/common.sh@33 -- # return 0 00:10:47.649 13:53:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:47.649 13:53:27 -- setup/hugepages.sh@112 -- # get_nodes 00:10:47.649 13:53:27 -- setup/hugepages.sh@27 -- # local node 00:10:47.649 13:53:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:47.649 13:53:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:47.649 13:53:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:47.649 13:53:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:47.649 13:53:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:47.649 13:53:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:47.649 13:53:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:47.649 13:53:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:47.649 13:53:27 -- setup/common.sh@18 -- # local node=0 00:10:47.649 13:53:27 -- setup/common.sh@19 -- # local var val 00:10:47.649 13:53:27 -- setup/common.sh@20 -- # local mem_f mem 00:10:47.649 13:53:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:47.649 13:53:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:47.649 13:53:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:47.649 13:53:27 -- setup/common.sh@28 -- # mapfile -t mem 00:10:47.649 13:53:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:47.649 13:53:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7366796 kB' 'MemUsed: 4875172 kB' 'SwapCached: 0 kB' 'Active: 891784 kB' 'Inactive: 1560232 kB' 'Active(anon): 133356 kB' 'Inactive(anon): 0 
kB' 'Active(file): 758428 kB' 'Inactive(file): 1560232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1280 kB' 'Writeback: 0 kB' 'FilePages: 2329124 kB' 'Mapped: 48860 kB' 'AnonPages: 124280 kB' 'Shmem: 10464 kB' 'KernelStack: 6368 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153408 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.649 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.649 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.650 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.650 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # continue 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # IFS=': ' 00:10:47.909 13:53:27 -- setup/common.sh@31 -- # read -r var val _ 00:10:47.909 13:53:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:47.909 13:53:27 -- setup/common.sh@33 -- # echo 0 00:10:47.909 13:53:27 -- setup/common.sh@33 -- # return 0 00:10:47.909 13:53:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:47.909 13:53:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:47.909 13:53:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:47.909 13:53:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:47.909 node0=1024 expecting 1024 00:10:47.909 13:53:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:47.909 00:10:47.909 real 0m0.672s 00:10:47.909 user 0m0.291s 00:10:47.909 sys 0m0.427s 00:10:47.909 13:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.909 13:53:27 -- common/autotest_common.sh@10 -- # set +x 00:10:47.909 ************************************ 00:10:47.909 END TEST even_2G_alloc 00:10:47.909 ************************************ 00:10:47.909 13:53:27 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:10:47.909 13:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.909 13:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.909 13:53:27 -- common/autotest_common.sh@10 -- # set +x 00:10:47.909 ************************************ 00:10:47.909 START TEST odd_alloc 00:10:47.909 ************************************ 00:10:47.909 13:53:27 -- common/autotest_common.sh@1111 -- # odd_alloc 00:10:47.909 13:53:27 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:10:47.909 13:53:27 -- setup/hugepages.sh@49 -- # local size=2098176 00:10:47.909 13:53:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:10:47.909 13:53:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:47.909 13:53:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:47.909 13:53:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:47.909 13:53:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:10:47.909 13:53:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:47.909 13:53:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:47.909 13:53:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:47.909 13:53:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:10:47.909 13:53:27 -- setup/hugepages.sh@83 -- # : 0 00:10:47.909 13:53:27 -- setup/hugepages.sh@84 -- # : 0 00:10:47.909 13:53:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:47.909 13:53:27 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:10:47.909 13:53:27 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:10:47.909 13:53:27 -- setup/hugepages.sh@160 -- # setup output 00:10:47.909 13:53:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:47.909 13:53:27 -- setup/common.sh@10 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:48.479 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.479 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:48.479 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:48.479 13:53:28 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:10:48.479 13:53:28 -- setup/hugepages.sh@89 -- # local node 00:10:48.479 13:53:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:48.479 13:53:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:48.479 13:53:28 -- setup/hugepages.sh@92 -- # local surp 00:10:48.479 13:53:28 -- setup/hugepages.sh@93 -- # local resv 00:10:48.479 13:53:28 -- setup/hugepages.sh@94 -- # local anon 00:10:48.479 13:53:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:48.479 13:53:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:48.479 13:53:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:48.479 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:48.479 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:48.479 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.479 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.479 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.479 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.479 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.479 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.479 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7363616 kB' 'MemAvailable: 9480332 kB' 'Buffers: 2436 kB' 'Cached: 2326692 kB' 'SwapCached: 0 kB' 'Active: 891824 kB' 'Inactive: 1560236 kB' 'Active(anon): 133396 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1432 kB' 'Writeback: 0 kB' 'AnonPages: 124504 kB' 'Mapped: 48980 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153424 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82996 kB' 'KernelStack: 6384 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 
13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- 
setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.480 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.480 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:48.481 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:48.481 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:48.481 13:53:28 -- setup/hugepages.sh@97 -- # anon=0 00:10:48.481 13:53:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:48.481 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:48.481 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:48.481 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:48.481 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.481 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.481 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.481 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.481 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.481 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.481 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7363616 kB' 'MemAvailable: 9480332 kB' 'Buffers: 2436 kB' 'Cached: 2326692 kB' 'SwapCached: 0 kB' 'Active: 891632 kB' 'Inactive: 1560236 kB' 'Active(anon): 133204 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1432 kB' 'Writeback: 0 kB' 'AnonPages: 124320 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153420 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82992 kB' 'KernelStack: 6368 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 
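The entries around this point are setup/common.sh's get_meminfo helper scanning /proc/meminfo with IFS=': ', testing each field name against the requested key (here HugePages_Surp) and issuing continue for every non-matching field until it can echo the value. A minimal standalone sketch of that scan, assuming only the standard "Key: value kB" layout of /proc/meminfo and omitting the per-node /sys/devices/system/node path handling that the real helper also supports:

read_meminfo_value() {
    # Print the value of a single /proc/meminfo field, e.g. HugePages_Surp.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Skip every field that does not match the requested key, mirroring
        # the [[ $var == $get ]] / continue pairs visible in the trace above.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

For example, read_meminfo_value HugePages_Surp prints 0 on this test VM, which is the value the traced loop returns a few entries further on.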
00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.481 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.481 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- 
setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 
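A few entries further on this same loop yields HugePages_Surp=0 and HugePages_Rsvd=0, and hugepages.sh checks them against the odd page count requested at the start of TEST odd_alloc: 2098176 kB of huge memory (HUGEMEM=2049 in the trace, i.e. 2049 * 1024 kB) at the 2048 kB Hugepagesize reported by meminfo works out to 1025 pages. A back-of-the-envelope sketch of that accounting; the round-up rule is inferred from the numbers in the trace rather than copied from hugepages.sh:

# Page-count arithmetic behind the odd_alloc test in this log.
size_kb=2098176          # requested huge memory: HUGEMEM=2049 -> 2049 * 1024 kB
hugepagesize_kb=2048     # Hugepagesize reported by /proc/meminfo on this VM

# 2098176 / 2048 = 1024.5, which ends up as the odd value 1025 (assumed round-up).
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))

# verify_nr_hugepages-style check: the HugePages_Total reported by the kernel
# must equal the requested count plus surplus and reserved pages from meminfo.
total=1025 surp=0 resv=0
if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=${nr_hugepages} accounted for"
fi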
00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.482 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:48.482 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:48.482 13:53:28 -- setup/hugepages.sh@99 -- # surp=0 00:10:48.482 13:53:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:48.482 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:48.482 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:48.482 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:48.482 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.482 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.482 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.482 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.482 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.482 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.482 
13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7363616 kB' 'MemAvailable: 9480332 kB' 'Buffers: 2436 kB' 'Cached: 2326692 kB' 'SwapCached: 0 kB' 'Active: 891728 kB' 'Inactive: 1560236 kB' 'Active(anon): 133300 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1432 kB' 'Writeback: 0 kB' 'AnonPages: 124404 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153420 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82992 kB' 'KernelStack: 6352 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 
13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.482 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.482 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 
13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.483 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.483 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:48.483 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:48.483 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:48.483 13:53:28 -- setup/hugepages.sh@100 -- # resv=0 00:10:48.483 nr_hugepages=1025 00:10:48.483 13:53:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:10:48.758 resv_hugepages=0 00:10:48.758 13:53:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:48.758 surplus_hugepages=0 00:10:48.758 13:53:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:48.758 anon_hugepages=0 00:10:48.759 13:53:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:48.759 13:53:28 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:48.759 13:53:28 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:10:48.759 13:53:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:48.759 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:48.759 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:48.759 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:48.759 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.759 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.759 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:48.759 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:48.759 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.759 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7363616 kB' 'MemAvailable: 9480332 kB' 'Buffers: 2436 kB' 'Cached: 2326692 kB' 'SwapCached: 0 kB' 'Active: 891636 kB' 'Inactive: 1560236 kB' 'Active(anon): 133208 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1432 kB' 'Writeback: 0 kB' 'AnonPages: 124320 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153420 kB' 
'SReclaimable: 70428 kB' 'SUnreclaim: 82992 kB' 'KernelStack: 6368 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 356616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 
13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- 
setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.759 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.759 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- 
setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:48.760 13:53:28 -- setup/common.sh@33 -- # echo 1025 00:10:48.760 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:48.760 13:53:28 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:10:48.760 13:53:28 -- setup/hugepages.sh@112 -- # get_nodes 00:10:48.760 13:53:28 -- setup/hugepages.sh@27 -- # local node 00:10:48.760 13:53:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:48.760 13:53:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:10:48.760 13:53:28 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:48.760 13:53:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:48.760 13:53:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:48.760 13:53:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:48.760 13:53:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:48.760 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:48.760 13:53:28 -- setup/common.sh@18 -- # local node=0 00:10:48.760 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:48.760 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:48.760 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:48.760 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:48.760 13:53:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:48.760 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:48.760 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7363616 kB' 'MemUsed: 4878352 kB' 'SwapCached: 0 kB' 'Active: 891568 kB' 'Inactive: 1560236 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560236 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1432 kB' 'Writeback: 0 kB' 'FilePages: 2329128 kB' 'Mapped: 48872 kB' 'AnonPages: 124244 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153420 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 
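[annotation] The trace above and below is setup/common.sh's get_meminfo walking a meminfo file one field at a time until it reaches the key it was asked for (here HugePages_Total, answered with "echo 1025", then HugePages_Surp read from node0's own meminfo). A condensed sketch of that flow for anyone re-checking the reads by hand follows; it is an illustrative rewrite, not the exact SPDK helper, and get_meminfo_sketch is a made-up name:

  # Minimal sketch of the traced get_meminfo flow (assumption: simplified rewrite).
  get_meminfo_sketch() {
      local get=$1 node=${2:-} line var val
      local mem_f=/proc/meminfo
      # A per-node query reads that node's meminfo instead of the global file,
      # exactly as the trace switches to /sys/devices/system/node/node0/meminfo.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#Node [0-9]* }            # drop the "Node N " prefix on per-node files
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then        # e.g. HugePages_Total -> 1025
              echo "$val"
              return 0
          fi
      done < "$mem_f"
      return 1
  }

On this VM, "get_meminfo_sketch HugePages_Total" should print 1025, and "get_meminfo_sketch HugePages_Surp 0" is the per-node read the trace performs next.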
00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.760 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.760 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # continue 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:48.761 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:48.761 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:48.761 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:48.761 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:48.761 13:53:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:48.761 13:53:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:48.761 13:53:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:48.761 node0=1025 expecting 1025 00:10:48.761 13:53:28 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:10:48.761 13:53:28 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:10:48.761 00:10:48.761 real 0m0.722s 00:10:48.761 user 0m0.329s 00:10:48.761 sys 0m0.440s 00:10:48.761 13:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:48.761 13:53:28 -- common/autotest_common.sh@10 -- # set +x 00:10:48.761 ************************************ 00:10:48.761 END TEST odd_alloc 00:10:48.761 ************************************ 00:10:48.761 13:53:28 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:10:48.761 13:53:28 -- 
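[annotation] Before the END TEST odd_alloc banner above, the test compares the kernel's global hugepage count against what it requested plus surplus and reserved pages, then confirms the single NUMA node holds the whole pool (the "node0=1025 expecting 1025" line). The same cross-check can be rerun by hand; the awk reads are a stand-in for get_meminfo, and the 1025/0/0 constants are the values shown in this run's meminfo dumps:

  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 1025 in this run
  nr_hugepages=1025 surp=0 resv=0                                    # requested pages, surplus, reserved (from the dumps above)
  (( total == nr_hugepages + surp + resv )) && echo "global hugepage count consistent"
  echo "node0=$total expecting 1025"                                 # single-node VM: node0 owns the whole pool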
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:48.761 13:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.761 13:53:28 -- common/autotest_common.sh@10 -- # set +x 00:10:48.761 ************************************ 00:10:48.761 START TEST custom_alloc 00:10:48.761 ************************************ 00:10:48.761 13:53:28 -- common/autotest_common.sh@1111 -- # custom_alloc 00:10:48.761 13:53:28 -- setup/hugepages.sh@167 -- # local IFS=, 00:10:48.761 13:53:28 -- setup/hugepages.sh@169 -- # local node 00:10:48.761 13:53:28 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:10:48.761 13:53:28 -- setup/hugepages.sh@170 -- # local nodes_hp 00:10:48.761 13:53:28 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:10:48.761 13:53:28 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:10:48.761 13:53:28 -- setup/hugepages.sh@49 -- # local size=1048576 00:10:48.761 13:53:28 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:10:48.761 13:53:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:10:48.761 13:53:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:48.761 13:53:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:48.761 13:53:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:48.761 13:53:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:48.761 13:53:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:48.761 13:53:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:48.761 13:53:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:10:48.761 13:53:28 -- setup/hugepages.sh@83 -- # : 0 00:10:48.761 13:53:28 -- setup/hugepages.sh@84 -- # : 0 00:10:48.761 13:53:28 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:10:48.761 13:53:28 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:10:48.761 13:53:28 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:10:48.761 13:53:28 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:10:48.761 13:53:28 -- setup/hugepages.sh@62 -- # user_nodes=() 00:10:48.761 13:53:28 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:48.761 13:53:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:10:48.761 13:53:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:48.761 13:53:28 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:48.761 13:53:28 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:48.761 13:53:28 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:10:48.761 13:53:28 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:10:48.761 13:53:28 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:10:48.761 13:53:28 -- setup/hugepages.sh@78 -- # return 0 00:10:48.761 13:53:28 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:10:48.761 13:53:28 -- setup/hugepages.sh@187 -- # setup output 00:10:48.761 13:53:28 -- setup/common.sh@9 -- 
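[annotation] custom_alloc asks get_test_nr_hugepages for a 1048576 kB (1 GiB) pool, and with the 2048 kB Hugepagesize reported in the meminfo dumps that works out to the nr_hugepages=512 and HUGENODE='nodes_hp[0]=512' seen in the trace above. The arithmetic spelled out (variable names here are illustrative, not the script's own):

  size_kb=1048576                                   # requested pool size in kB (1 GiB)
  hugepagesize_kb=2048                              # Hugepagesize from the meminfo dumps
  nr_hugepages=$(( size_kb / hugepagesize_kb ))     # 512
  HUGENODE="nodes_hp[0]=${nr_hugepages}"            # one NUMA node, so the whole pool lands on node0
  echo "$HUGENODE"                                  # nodes_hp[0]=512

The later meminfo dumps agree: 'HugePages_Total: 512' and 'Hugetlb: 1048576 kB'.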
# [[ output == output ]] 00:10:48.761 13:53:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:49.334 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:49.334 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.334 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:49.334 13:53:28 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:10:49.334 13:53:28 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:10:49.334 13:53:28 -- setup/hugepages.sh@89 -- # local node 00:10:49.334 13:53:28 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:49.334 13:53:28 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:49.334 13:53:28 -- setup/hugepages.sh@92 -- # local surp 00:10:49.334 13:53:28 -- setup/hugepages.sh@93 -- # local resv 00:10:49.334 13:53:28 -- setup/hugepages.sh@94 -- # local anon 00:10:49.334 13:53:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:49.334 13:53:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:49.334 13:53:28 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:49.334 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:49.334 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:49.334 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.334 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.334 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.334 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.334 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.334 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8416956 kB' 'MemAvailable: 10533676 kB' 'Buffers: 2436 kB' 'Cached: 2326696 kB' 'SwapCached: 0 kB' 'Active: 891796 kB' 'Inactive: 1560240 kB' 'Active(anon): 133368 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 124532 kB' 'Mapped: 49124 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153572 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 83144 kB' 'KernelStack: 6340 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
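[annotation] The setup.sh lines above skip the virtio disk that backs mounted filesystems (0000:00:03.0) and leave the two emulated NVMe controllers on uio_pci_generic, then verify_nr_hugepages starts by deciding whether AnonHugePages needs to be counted at all. That is what the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test is doing: transparent hugepages are only ignored when the active mode is [never]. A hand-runnable paraphrase, using the standard sysfs path and simplified from the hugepages.sh logic:

  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP is not fully disabled, so anonymous huge pages could skew the accounting.
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
  else
      anon=0
  fi
  echo "AnonHugePages counted: ${anon:-0} kB"                # 0 kB in this run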
00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- 
setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.334 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.334 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- 
# read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 
13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:49.335 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:49.335 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:49.335 13:53:28 -- setup/hugepages.sh@97 -- # anon=0 00:10:49.335 13:53:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:49.335 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:49.335 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:49.335 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:49.335 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.335 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.335 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.335 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.335 13:53:28 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.335 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8418136 kB' 'MemAvailable: 10534856 kB' 'Buffers: 2436 kB' 'Cached: 2326696 kB' 'SwapCached: 0 kB' 'Active: 891716 kB' 'Inactive: 1560240 kB' 'Active(anon): 133288 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 124448 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153568 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 83140 kB' 'KernelStack: 6384 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- 
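[annotation] From here the same get_meminfo walk repeats for HugePages_Surp (the read in progress below) and then HugePages_Rsvd, feeding the final accounting for the 512-page pool. Paraphrased as a self-contained re-check; the equality mirrors the "nr_hugepages + surp + resv" comparison traced earlier and is not the literal hugepages.sh code:

  surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)    # 0 in this run
  resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)    # 0 in this run
  total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)   # 512 after custom_alloc's setup
  (( total == 512 + surp + resv )) && echo "custom_alloc pool fully accounted for"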
setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.335 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.335 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read 
-r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # 
continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.336 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:49.336 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:49.336 13:53:28 -- setup/hugepages.sh@99 -- # surp=0 00:10:49.336 13:53:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:49.336 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:49.336 13:53:28 -- setup/common.sh@18 -- # local node= 00:10:49.336 13:53:28 -- setup/common.sh@19 -- # local var val 00:10:49.336 13:53:28 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.336 13:53:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.336 13:53:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.336 13:53:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.336 13:53:28 -- 
setup/common.sh@28 -- # mapfile -t mem 00:10:49.336 13:53:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.336 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.336 13:53:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8418136 kB' 'MemAvailable: 10534856 kB' 'Buffers: 2436 kB' 'Cached: 2326696 kB' 'SwapCached: 0 kB' 'Active: 891388 kB' 'Inactive: 1560240 kB' 'Active(anon): 132960 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 124112 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153564 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 83136 kB' 'KernelStack: 6368 kB' 'PageTables: 4376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.336 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 
00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- 
setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.337 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.337 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # continue 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.338 13:53:28 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.338 13:53:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:49.338 13:53:28 -- setup/common.sh@33 -- # echo 0 00:10:49.338 13:53:28 -- setup/common.sh@33 -- # return 0 00:10:49.338 13:53:28 -- setup/hugepages.sh@100 -- # resv=0 00:10:49.338 13:53:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:10:49.338 nr_hugepages=512 00:10:49.338 resv_hugepages=0 00:10:49.338 13:53:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:49.338 surplus_hugepages=0 00:10:49.338 13:53:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:49.338 anon_hugepages=0 00:10:49.338 13:53:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:49.338 13:53:28 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:49.338 13:53:28 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:10:49.338 13:53:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:49.338 13:53:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:49.338 13:53:29 -- setup/common.sh@18 -- # local node= 00:10:49.338 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:49.338 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.338 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.338 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:49.338 13:53:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:49.338 13:53:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.338 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.599 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8418136 kB' 'MemAvailable: 10534856 kB' 'Buffers: 2436 kB' 'Cached: 2326696 kB' 'SwapCached: 0 kB' 'Active: 891680 kB' 'Inactive: 1560240 kB' 'Active(anon): 133252 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'AnonPages: 124412 kB' 'Mapped: 48876 kB' 'Shmem: 
10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153564 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 83136 kB' 'KernelStack: 6384 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 356756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:49.599 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.599 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 
-- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 
13:53:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 
13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.600 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.600 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:49.601 13:53:29 -- setup/common.sh@33 -- # echo 512 00:10:49.601 13:53:29 -- setup/common.sh@33 -- # return 0 00:10:49.601 13:53:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:10:49.601 13:53:29 -- setup/hugepages.sh@112 -- # get_nodes 00:10:49.601 13:53:29 -- setup/hugepages.sh@27 -- # local node 00:10:49.601 13:53:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:49.601 13:53:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:10:49.601 13:53:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:49.601 13:53:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:49.601 13:53:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:49.601 13:53:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:49.601 13:53:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:49.601 13:53:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:49.601 13:53:29 -- setup/common.sh@18 -- # local node=0 00:10:49.601 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:49.601 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:49.601 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:49.601 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:49.601 13:53:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:49.601 13:53:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:49.601 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8418136 kB' 'MemUsed: 3823832 kB' 'SwapCached: 0 kB' 'Active: 891584 kB' 'Inactive: 1560240 kB' 'Active(anon): 133156 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560240 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 688 kB' 'Writeback: 0 kB' 'FilePages: 2329132 kB' 'Mapped: 48876 kB' 'AnonPages: 124272 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153564 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 83136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ MemUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.601 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.601 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # 
continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # continue 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:49.602 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:49.602 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:49.602 13:53:29 -- setup/common.sh@33 -- # echo 0 00:10:49.602 13:53:29 -- setup/common.sh@33 -- # return 0 00:10:49.602 13:53:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:49.602 13:53:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:49.602 13:53:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:49.602 13:53:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:49.602 13:53:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:10:49.602 node0=512 expecting 512 00:10:49.602 13:53:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:10:49.602 00:10:49.602 real 0m0.716s 00:10:49.602 user 0m0.348s 00:10:49.602 sys 0m0.416s 00:10:49.602 13:53:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.602 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:49.602 ************************************ 00:10:49.602 END TEST custom_alloc 00:10:49.602 ************************************ 00:10:49.602 13:53:29 -- 
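The get_meminfo calls traced through the custom_alloc test above all follow one pattern: pick /proc/meminfo (or the per-node copy under /sys/devices/system/node/node0/meminfo when a node id is given), walk it with IFS=': ' and read -r var val _, and print the value once the requested key matches; every non-matching key is the "continue" repeated on each trace line. A minimal sketch of that pattern, with assumed names and without the mapfile/extglob handling the real setup/common.sh uses:

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live in a separate file and prefix each row with "Node <id> ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}                 # strip the per-node prefix if present
            IFS=': ' read -r var val _ <<< "$line"     # e.g. var=HugePages_Total val=512
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }

On this VM a call such as get_meminfo HugePages_Total 0 would print 512, the value the surrounding "node0=512 expecting 512" check compares against.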
setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:10:49.602 13:53:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:49.602 13:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.602 13:53:29 -- common/autotest_common.sh@10 -- # set +x 00:10:49.602 ************************************ 00:10:49.602 START TEST no_shrink_alloc 00:10:49.602 ************************************ 00:10:49.602 13:53:29 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:10:49.602 13:53:29 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:10:49.602 13:53:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:10:49.602 13:53:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:10:49.602 13:53:29 -- setup/hugepages.sh@51 -- # shift 00:10:49.602 13:53:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:10:49.602 13:53:29 -- setup/hugepages.sh@52 -- # local node_ids 00:10:49.602 13:53:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:10:49.602 13:53:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:10:49.602 13:53:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:10:49.602 13:53:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:10:49.602 13:53:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:10:49.602 13:53:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:10:49.602 13:53:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:10:49.602 13:53:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:10:49.602 13:53:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:10:49.602 13:53:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:10:49.602 13:53:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:10:49.602 13:53:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:10:49.602 13:53:29 -- setup/hugepages.sh@73 -- # return 0 00:10:49.602 13:53:29 -- setup/hugepages.sh@198 -- # setup output 00:10:49.602 13:53:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:49.602 13:53:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:50.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:50.173 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.173 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:50.173 13:53:29 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:10:50.173 13:53:29 -- setup/hugepages.sh@89 -- # local node 00:10:50.173 13:53:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:50.173 13:53:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:50.173 13:53:29 -- setup/hugepages.sh@92 -- # local surp 00:10:50.173 13:53:29 -- setup/hugepages.sh@93 -- # local resv 00:10:50.173 13:53:29 -- setup/hugepages.sh@94 -- # local anon 00:10:50.173 13:53:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:50.173 13:53:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:50.173 13:53:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:50.173 13:53:29 -- setup/common.sh@18 -- # local node= 00:10:50.173 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:50.173 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.173 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.173 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.173 13:53:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.173 13:53:29 -- 
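The no_shrink_alloc setup that starts here sizes its pool the same way the custom_alloc test did: taking the figures in the log at face value (sizes in kB), the requested amount is divided by the Hugepagesize reported in the meminfo dumps above (2048 kB) and the result is assigned to the single memory node of this VM. A hedged illustration of that arithmetic (hypothetical variable names, not the hugepages.sh source):

    requested_kb=2097152                                  # no_shrink_alloc asks for 2 GiB
    hugepagesize_kb=2048                                  # 'Hugepagesize: 2048 kB' in the dumps above
    nr_hugepages=$((requested_kb / hugepagesize_kb))      # -> 1024, matching nr_hugepages=1024 in the trace
    nodes_test[0]=$nr_hugepages                           # one node on this VM, so node0 gets all of it

The same division explains the previous test: 1048576 kB / 2048 kB gives the 512 pages behind the node0=512 check.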
setup/common.sh@28 -- # mapfile -t mem 00:10:50.173 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7371852 kB' 'MemAvailable: 9488576 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 887216 kB' 'Inactive: 1560244 kB' 'Active(anon): 128788 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 872 kB' 'Writeback: 0 kB' 'AnonPages: 119924 kB' 'Mapped: 48160 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153164 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82736 kB' 'KernelStack: 6256 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ 
Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.173 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.173 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read 
-r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- 
# IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:50.174 13:53:29 -- setup/common.sh@33 -- # echo 0 00:10:50.174 13:53:29 -- setup/common.sh@33 -- # return 0 00:10:50.174 13:53:29 -- setup/hugepages.sh@97 -- # anon=0 00:10:50.174 13:53:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:50.174 13:53:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:50.174 13:53:29 -- setup/common.sh@18 -- # local node= 00:10:50.174 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:50.174 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.174 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.174 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.174 13:53:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.174 13:53:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.174 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7371600 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 886984 kB' 'Inactive: 1560244 kB' 'Active(anon): 128556 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 872 kB' 'Writeback: 0 kB' 'AnonPages: 119740 kB' 'Mapped: 48044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153164 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82736 kB' 'KernelStack: 6240 kB' 'PageTables: 3684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.174 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.174 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 
-- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.175 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.175 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.176 13:53:29 -- setup/common.sh@33 -- # echo 0 00:10:50.176 13:53:29 -- setup/common.sh@33 -- # return 0 00:10:50.176 13:53:29 -- setup/hugepages.sh@99 -- # surp=0 00:10:50.176 13:53:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:50.176 13:53:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:50.176 13:53:29 -- setup/common.sh@18 -- # local node= 00:10:50.176 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:50.176 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.176 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.176 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.176 13:53:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.176 13:53:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.176 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7371600 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 886696 kB' 'Inactive: 1560244 kB' 'Active(anon): 128268 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 872 kB' 'Writeback: 0 kB' 'AnonPages: 119444 kB' 'Mapped: 48044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153164 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82736 kB' 'KernelStack: 6224 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.176 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.176 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 
-- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 
13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.438 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.438 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ Percpu 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:50.439 13:53:29 -- setup/common.sh@33 -- # echo 0 00:10:50.439 13:53:29 -- setup/common.sh@33 -- # return 0 
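The scan that just finished above is the trace of a get_meminfo-style helper walking /proc/meminfo key by key until it reaches the requested field (here HugePages_Rsvd, which echoes 0 and returns). A minimal sketch of that parsing loop, reconstructed from the trace rather than taken from the SPDK setup/common.sh source (variable names follow the trace; the function name is made up), could look like this:

# Sketch only: reconstructs the meminfo-parsing loop visible in the trace above.
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below
get_meminfo_sketch() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  local -a mem
  local line var val _
  # Per-node lookups read that node's own meminfo file when it exists.
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  # Node files prefix every line with "Node <N> "; strip it as the trace does.
  mem=("${mem[@]#Node +([0-9]) }")
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] || continue   # keep scanning until the key matches
    echo "$val"
    return 0
  done
  return 1
}
# Usage matching the trace: get_meminfo_sketch HugePages_Rsvd   -> 0
#                           get_meminfo_sketch HugePages_Surp 0 -> 0 (node0)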
00:10:50.439 13:53:29 -- setup/hugepages.sh@100 -- # resv=0 00:10:50.439 13:53:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:10:50.439 nr_hugepages=1024 00:10:50.439 resv_hugepages=0 00:10:50.439 13:53:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:50.439 surplus_hugepages=0 00:10:50.439 13:53:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:50.439 anon_hugepages=0 00:10:50.439 13:53:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:50.439 13:53:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:50.439 13:53:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:50.439 13:53:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:50.439 13:53:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:50.439 13:53:29 -- setup/common.sh@18 -- # local node= 00:10:50.439 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:50.439 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.439 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:50.439 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:50.439 13:53:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:50.439 13:53:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.439 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.439 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.439 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7371600 kB' 'MemAvailable: 9488324 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 886896 kB' 'Inactive: 1560244 kB' 'Active(anon): 128468 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 872 kB' 'Writeback: 0 kB' 'AnonPages: 119604 kB' 'Mapped: 48044 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153164 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82736 kB' 'KernelStack: 6192 kB' 'PageTables: 3540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 
13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 
13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.440 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.440 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:50.441 13:53:29 -- setup/common.sh@33 -- # echo 1024 00:10:50.441 13:53:29 -- setup/common.sh@33 -- # return 0 00:10:50.441 13:53:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:50.441 13:53:29 -- setup/hugepages.sh@112 -- # get_nodes 00:10:50.441 13:53:29 -- setup/hugepages.sh@27 -- # local node 00:10:50.441 13:53:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:50.441 13:53:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:50.441 13:53:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:50.441 13:53:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:50.441 13:53:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:50.441 13:53:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:50.441 13:53:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:50.441 13:53:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:50.441 13:53:29 -- setup/common.sh@18 -- # local node=0 00:10:50.441 13:53:29 -- setup/common.sh@19 -- # local var val 00:10:50.441 13:53:29 -- setup/common.sh@20 -- # local mem_f mem 00:10:50.441 13:53:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
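In plain terms, the hugepages.sh trace here checks that the global counters just read (nr_hugepages=1024, surplus=0, reserved=0) add up to HugePages_Total, and then repeats the lookup per NUMA node against /sys/devices/system/node/nodeN/meminfo, which is exactly the node0 pass that follows. A hedged sketch of that accounting, built on the get_meminfo_sketch helper above (function names illustrative, not the real verify_nr_hugepages):

# Sketch of the accounting the hugepages.sh trace walks through.
verify_hugepages_sketch() {
  local expected=$1                                   # e.g. 1024
  local total surp resv node n
  total=$(get_meminfo_sketch HugePages_Total)
  surp=$(get_meminfo_sketch HugePages_Surp)
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  # Global check seen at hugepages.sh@107: allocated == requested + surplus + reserved.
  (( total == expected + surp + resv )) || return 1
  # Per-node check: repeat the lookup against each node's own meminfo file.
  for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    echo "node$n=$(get_meminfo_sketch HugePages_Total "$n") expecting $expected"
  done
}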
00:10:50.441 13:53:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:50.441 13:53:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:10:50.441 13:53:29 -- setup/common.sh@28 -- # mapfile -t mem 00:10:50.441 13:53:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:50.441 13:53:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7371980 kB' 'MemUsed: 4869988 kB' 'SwapCached: 0 kB' 'Active: 886712 kB' 'Inactive: 1560244 kB' 'Active(anon): 128284 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 872 kB' 'Writeback: 0 kB' 'FilePages: 2329136 kB' 'Mapped: 48044 kB' 'AnonPages: 119676 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 3636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153164 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # 
read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.441 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.441 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 
13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # continue 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # IFS=': ' 00:10:50.442 13:53:29 -- setup/common.sh@31 -- # read -r var val _ 00:10:50.442 13:53:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:50.442 13:53:29 -- setup/common.sh@33 -- # echo 0 00:10:50.442 13:53:29 -- setup/common.sh@33 -- # return 0 00:10:50.442 13:53:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:50.442 13:53:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:50.442 13:53:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:50.442 13:53:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:50.442 13:53:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:50.442 node0=1024 expecting 1024 00:10:50.442 13:53:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:50.442 13:53:29 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:10:50.442 13:53:29 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:10:50.442 13:53:29 -- setup/hugepages.sh@202 -- # setup output 00:10:50.442 13:53:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:50.442 13:53:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:51.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:51.016 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.016 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:51.016 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:10:51.016 13:53:30 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:10:51.016 13:53:30 -- setup/hugepages.sh@89 -- # local node 00:10:51.016 13:53:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:10:51.016 13:53:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:10:51.016 13:53:30 -- setup/hugepages.sh@92 -- # local surp 00:10:51.016 13:53:30 -- setup/hugepages.sh@93 -- # local resv 00:10:51.016 13:53:30 -- setup/hugepages.sh@94 -- # local anon 00:10:51.016 13:53:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:10:51.016 13:53:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:10:51.016 13:53:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:10:51.016 13:53:30 -- setup/common.sh@18 -- # local node= 00:10:51.016 13:53:30 -- setup/common.sh@19 -- # local var val 00:10:51.016 13:53:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:51.016 13:53:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:51.016 13:53:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:51.016 13:53:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:51.016 13:53:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:51.016 13:53:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
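For readers following the xtrace records above and below: the long runs of "IFS=': '", "read -r var val _" and "continue" are a per-key scan of /proc/meminfo, and the hugepages.sh records feed the values it returns into an accounting check. A minimal sketch of that flow, assuming illustrative helper names (get_meminfo_value, verify_hugepages) rather than the actual functions in setup/common.sh and setup/hugepages.sh:

#!/usr/bin/env bash
# Illustrative reconstruction only -- the helper names below are hypothetical,
# not the real SPDK test helpers traced in this log.

# Return the value column for one /proc/meminfo key,
# e.g. get_meminfo_value HugePages_Total -> 1024.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Accounting check in the spirit of the (( 1024 == nr_hugepages + surp + resv ))
# record later in this log; the expected count is passed in, and nr_hugepages is
# taken from HugePages_Total here only to keep the sketch self-contained.
verify_hugepages() {
    local expected=$1
    local nr_hugepages surp resv
    nr_hugepages=$(get_meminfo_value HugePages_Total)
    surp=$(get_meminfo_value HugePages_Surp)
    resv=$(get_meminfo_value HugePages_Rsvd)
    echo "resv_hugepages=$resv surplus_hugepages=$surp"
    (( expected == nr_hugepages + surp + resv ))
}

verify_hugepages 1024 && echo '1024 hugepages accounted for'

On the state captured in the meminfo snapshots below (HugePages_Total: 1024, HugePages_Surp: 0, HugePages_Rsvd: 0), the final check reduces to 1024 == 1024 + 0 + 0, consistent with the "node0=1024 expecting 1024" record above.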
00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7369344 kB' 'MemAvailable: 9486068 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 886832 kB' 'Inactive: 1560244 kB' 'Active(anon): 128404 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 48248 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153152 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82724 kB' 'KernelStack: 6212 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 
-- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 
13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.016 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.016 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 
-- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:10:51.017 13:53:30 -- setup/common.sh@33 -- # echo 0 00:10:51.017 13:53:30 -- setup/common.sh@33 -- # return 0 00:10:51.017 13:53:30 -- setup/hugepages.sh@97 -- # anon=0 00:10:51.017 13:53:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:10:51.017 13:53:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:51.017 13:53:30 -- setup/common.sh@18 -- # local node= 00:10:51.017 13:53:30 -- setup/common.sh@19 -- # local var val 00:10:51.017 13:53:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:51.017 13:53:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:51.017 13:53:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:51.017 13:53:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:51.017 13:53:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:51.017 13:53:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7369344 kB' 'MemAvailable: 9486068 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 887028 kB' 'Inactive: 1560244 kB' 'Active(anon): 128600 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 119744 kB' 'Mapped: 48248 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 
153152 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82724 kB' 'KernelStack: 6196 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.017 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.017 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 
00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.018 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 
00:10:51.018 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.018 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.019 13:53:30 -- setup/common.sh@33 -- # echo 0 00:10:51.019 13:53:30 -- setup/common.sh@33 -- # return 0 00:10:51.019 13:53:30 -- setup/hugepages.sh@99 -- # surp=0 00:10:51.019 13:53:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:10:51.019 13:53:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:10:51.019 13:53:30 -- setup/common.sh@18 -- # local node= 00:10:51.019 13:53:30 -- setup/common.sh@19 -- # local var val 00:10:51.019 13:53:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:51.019 13:53:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:51.019 13:53:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:51.019 13:53:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:51.019 13:53:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:51.019 13:53:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7369344 kB' 'MemAvailable: 9486068 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 886900 kB' 'Inactive: 1560244 kB' 'Active(anon): 128472 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 119664 kB' 'Mapped: 48152 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153152 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82724 kB' 'KernelStack: 6256 kB' 'PageTables: 3876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- 
# IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.019 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.019 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 
-- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.020 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:10:51.020 13:53:30 -- setup/common.sh@33 -- # echo 0 00:10:51.020 13:53:30 -- setup/common.sh@33 -- # return 0 00:10:51.020 nr_hugepages=1024 00:10:51.020 13:53:30 -- setup/hugepages.sh@100 -- # resv=0 00:10:51.020 13:53:30 -- setup/hugepages.sh@102 
-- # echo nr_hugepages=1024 00:10:51.020 resv_hugepages=0 00:10:51.020 surplus_hugepages=0 00:10:51.020 anon_hugepages=0 00:10:51.020 13:53:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:10:51.020 13:53:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:10:51.020 13:53:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:10:51.020 13:53:30 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:51.020 13:53:30 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:10:51.020 13:53:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:10:51.020 13:53:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:10:51.020 13:53:30 -- setup/common.sh@18 -- # local node= 00:10:51.020 13:53:30 -- setup/common.sh@19 -- # local var val 00:10:51.020 13:53:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:51.020 13:53:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:51.020 13:53:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:10:51.020 13:53:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:10:51.020 13:53:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:51.020 13:53:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.020 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7369344 kB' 'MemAvailable: 9486068 kB' 'Buffers: 2436 kB' 'Cached: 2326700 kB' 'SwapCached: 0 kB' 'Active: 887164 kB' 'Inactive: 1560244 kB' 'Active(anon): 128736 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'AnonPages: 119488 kB' 'Mapped: 48152 kB' 'Shmem: 10464 kB' 'KReclaimable: 70428 kB' 'Slab: 153152 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82724 kB' 'KernelStack: 6304 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 339368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 173932 kB' 'DirectMap2M: 6117376 kB' 'DirectMap1G: 8388608 kB' 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.021 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.021 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:10:51.022 13:53:30 -- setup/common.sh@33 -- # echo 1024 00:10:51.022 13:53:30 -- setup/common.sh@33 -- # return 0 00:10:51.022 13:53:30 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:10:51.022 13:53:30 -- setup/hugepages.sh@112 -- # get_nodes 00:10:51.022 13:53:30 -- setup/hugepages.sh@27 -- # local node 00:10:51.022 13:53:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:10:51.022 13:53:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:10:51.022 13:53:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:10:51.022 13:53:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:10:51.022 13:53:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:10:51.022 13:53:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:10:51.022 13:53:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:10:51.022 13:53:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:10:51.022 13:53:30 -- setup/common.sh@18 -- # local node=0 00:10:51.022 13:53:30 -- setup/common.sh@19 -- # local var val 00:10:51.022 13:53:30 -- setup/common.sh@20 -- # local mem_f mem 00:10:51.022 13:53:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:10:51.022 13:53:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:10:51.022 13:53:30 -- setup/common.sh@24 -- 
# mem_f=/sys/devices/system/node/node0/meminfo 00:10:51.022 13:53:30 -- setup/common.sh@28 -- # mapfile -t mem 00:10:51.022 13:53:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7369344 kB' 'MemUsed: 4872624 kB' 'SwapCached: 0 kB' 'Active: 886804 kB' 'Inactive: 1560244 kB' 'Active(anon): 128376 kB' 'Inactive(anon): 0 kB' 'Active(file): 758428 kB' 'Inactive(file): 1560244 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 880 kB' 'Writeback: 0 kB' 'FilePages: 2329136 kB' 'Mapped: 48152 kB' 'AnonPages: 119548 kB' 'Shmem: 10464 kB' 'KernelStack: 6240 kB' 'PageTables: 3828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70428 kB' 'Slab: 153152 kB' 'SReclaimable: 70428 kB' 'SUnreclaim: 82724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- 
setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.022 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.022 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # continue 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # IFS=': ' 00:10:51.023 13:53:30 -- setup/common.sh@31 -- # read -r var val _ 00:10:51.023 13:53:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:10:51.023 13:53:30 -- setup/common.sh@33 -- # echo 0 00:10:51.023 13:53:30 -- setup/common.sh@33 -- # return 0 00:10:51.023 13:53:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:10:51.023 13:53:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:10:51.023 13:53:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:10:51.023 13:53:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:10:51.023 node0=1024 expecting 1024 00:10:51.023 13:53:30 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:10:51.023 13:53:30 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:10:51.023 00:10:51.023 real 0m1.475s 00:10:51.023 user 0m0.679s 00:10:51.023 sys 0m0.865s 00:10:51.023 13:53:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.023 ************************************ 00:10:51.023 END TEST no_shrink_alloc 00:10:51.023 ************************************ 00:10:51.023 13:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.282 13:53:30 -- setup/hugepages.sh@217 -- # clear_hp 00:10:51.282 13:53:30 -- setup/hugepages.sh@37 -- # local node hp 00:10:51.282 13:53:30 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:10:51.282 13:53:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:51.282 13:53:30 -- setup/hugepages.sh@41 -- # echo 0 00:10:51.282 13:53:30 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:10:51.282 13:53:30 -- setup/hugepages.sh@41 -- # echo 0 00:10:51.282 13:53:30 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:10:51.282 13:53:30 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:10:51.282 00:10:51.282 real 0m6.490s 00:10:51.282 user 0m2.835s 00:10:51.282 sys 0m3.775s 00:10:51.282 ************************************ 00:10:51.283 END TEST hugepages 00:10:51.283 ************************************ 00:10:51.283 13:53:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:51.283 13:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.283 13:53:30 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:51.283 13:53:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:51.283 13:53:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.283 13:53:30 -- common/autotest_common.sh@10 -- # set +x 00:10:51.283 ************************************ 00:10:51.283 START TEST driver 00:10:51.283 ************************************ 00:10:51.283 13:53:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:10:51.541 * Looking for test storage... 
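The dense block of "continue" lines above is setup/common.sh scanning a meminfo file field by field: every field that is not the requested one (HugePages_Total, then HugePages_Surp for node 0) produces one [[ ... ]] test plus one continue in the xtrace. A condensed sketch of that scan, reconstructed from the trace rather than copied from the script (the real function uses mapfile and an extglob strip for the per-node "Node <n> " prefix, so details differ):

# get_meminfo FIELD [NODE] - print the value of FIELD from /proc/meminfo,
# or from the per-node meminfo file when a NUMA node is given.
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <n> "; drop it so the field
    # name is the first token, then scan until the requested field matches.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

With the 1024 huge pages configured for this run, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0, which matches the "echo 1024" and "echo 0" lines in the trace.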
00:10:51.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:51.541 13:53:31 -- setup/driver.sh@68 -- # setup reset 00:10:51.541 13:53:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:51.541 13:53:31 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:52.480 13:53:31 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:10:52.480 13:53:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:52.480 13:53:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:52.480 13:53:31 -- common/autotest_common.sh@10 -- # set +x 00:10:52.480 ************************************ 00:10:52.480 START TEST guess_driver 00:10:52.480 ************************************ 00:10:52.480 13:53:31 -- common/autotest_common.sh@1111 -- # guess_driver 00:10:52.480 13:53:31 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:10:52.480 13:53:31 -- setup/driver.sh@47 -- # local fail=0 00:10:52.480 13:53:31 -- setup/driver.sh@49 -- # pick_driver 00:10:52.480 13:53:31 -- setup/driver.sh@36 -- # vfio 00:10:52.480 13:53:31 -- setup/driver.sh@21 -- # local iommu_grups 00:10:52.480 13:53:31 -- setup/driver.sh@22 -- # local unsafe_vfio 00:10:52.480 13:53:31 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:10:52.480 13:53:31 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:10:52.480 13:53:31 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:10:52.480 13:53:31 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:10:52.480 13:53:31 -- setup/driver.sh@32 -- # return 1 00:10:52.480 13:53:31 -- setup/driver.sh@38 -- # uio 00:10:52.480 13:53:31 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:10:52.480 13:53:31 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:10:52.480 13:53:31 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:10:52.480 13:53:31 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:10:52.480 13:53:31 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:10:52.480 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:10:52.480 13:53:31 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:10:52.480 Looking for driver=uio_pci_generic 00:10:52.480 13:53:31 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:10:52.480 13:53:31 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:10:52.480 13:53:31 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:10:52.480 13:53:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:52.480 13:53:31 -- setup/driver.sh@45 -- # setup output config 00:10:52.480 13:53:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:52.480 13:53:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:53.417 13:53:32 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:10:53.417 13:53:32 -- setup/driver.sh@58 -- # continue 00:10:53.417 13:53:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.418 13:53:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:53.418 13:53:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:53.418 13:53:32 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.418 13:53:32 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:10:53.418 13:53:32 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:10:53.418 13:53:32 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:10:53.418 13:53:33 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:10:53.418 13:53:33 -- setup/driver.sh@65 -- # setup reset 00:10:53.418 13:53:33 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:53.418 13:53:33 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:54.395 00:10:54.395 real 0m1.885s 00:10:54.395 user 0m0.687s 00:10:54.395 sys 0m1.254s 00:10:54.395 ************************************ 00:10:54.395 END TEST guess_driver 00:10:54.395 ************************************ 00:10:54.395 13:53:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.395 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:10:54.395 ************************************ 00:10:54.395 END TEST driver 00:10:54.395 ************************************ 00:10:54.395 00:10:54.395 real 0m2.955s 00:10:54.395 user 0m1.056s 00:10:54.395 sys 0m2.034s 00:10:54.395 13:53:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:54.395 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:10:54.395 13:53:33 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:54.395 13:53:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:54.395 13:53:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:54.395 13:53:33 -- common/autotest_common.sh@10 -- # set +x 00:10:54.395 ************************************ 00:10:54.395 START TEST devices 00:10:54.395 ************************************ 00:10:54.395 13:53:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:10:54.654 * Looking for test storage... 00:10:54.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:10:54.654 13:53:34 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:10:54.654 13:53:34 -- setup/devices.sh@192 -- # setup reset 00:10:54.654 13:53:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:10:54.654 13:53:34 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:55.590 13:53:35 -- setup/devices.sh@194 -- # get_zoned_devs 00:10:55.590 13:53:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:55.590 13:53:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:55.590 13:53:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:55.590 13:53:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.590 13:53:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:55.590 13:53:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:55.590 13:53:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.590 13:53:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:10:55.590 13:53:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:10:55.590 13:53:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.590 13:53:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:10:55.590 13:53:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:10:55.590 13:53:35 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:55.590 13:53:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:55.590 13:53:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:55.590 13:53:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:55.590 13:53:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:55.590 13:53:35 -- setup/devices.sh@196 -- # blocks=() 00:10:55.590 13:53:35 -- setup/devices.sh@196 -- # declare -a blocks 00:10:55.590 13:53:35 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:10:55.590 13:53:35 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:10:55.590 13:53:35 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:10:55.590 13:53:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.590 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:10:55.590 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:55.590 13:53:35 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:55.590 13:53:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:55.590 13:53:35 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:10:55.590 13:53:35 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:10:55.590 13:53:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:10:55.590 No valid GPT data, bailing 00:10:55.590 13:53:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:10:55.590 13:53:35 -- scripts/common.sh@391 -- # pt= 00:10:55.590 13:53:35 -- scripts/common.sh@392 -- # return 1 00:10:55.590 13:53:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:10:55.590 13:53:35 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:55.590 13:53:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:55.590 13:53:35 -- setup/common.sh@80 -- # echo 4294967296 00:10:55.590 13:53:35 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:55.591 13:53:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.591 13:53:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:55.591 13:53:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.591 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:10:55.591 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:55.591 13:53:35 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:55.591 13:53:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:55.591 13:53:35 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:10:55.591 13:53:35 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:10:55.591 13:53:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:10:55.591 No valid GPT data, bailing 00:10:55.591 13:53:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:10:55.591 13:53:35 -- scripts/common.sh@391 -- # pt= 00:10:55.591 13:53:35 -- scripts/common.sh@392 -- # return 1 00:10:55.591 13:53:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:10:55.591 13:53:35 -- setup/common.sh@76 -- # local dev=nvme0n2 00:10:55.591 13:53:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:10:55.591 13:53:35 -- setup/common.sh@80 -- # echo 4294967296 00:10:55.591 13:53:35 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:55.591 13:53:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.591 13:53:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:55.591 13:53:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.591 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:10:55.591 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme0 00:10:55.591 13:53:35 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:10:55.591 13:53:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:10:55.591 13:53:35 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:10:55.591 13:53:35 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:10:55.591 13:53:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:10:55.591 No valid GPT data, bailing 00:10:55.591 13:53:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:10:55.850 13:53:35 -- scripts/common.sh@391 -- # pt= 00:10:55.850 13:53:35 -- scripts/common.sh@392 -- # return 1 00:10:55.850 13:53:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:10:55.850 13:53:35 -- setup/common.sh@76 -- # local dev=nvme0n3 00:10:55.850 13:53:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:10:55.850 13:53:35 -- setup/common.sh@80 -- # echo 4294967296 00:10:55.850 13:53:35 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:10:55.850 13:53:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.850 13:53:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:10:55.850 13:53:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:10:55.850 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:10:55.850 13:53:35 -- setup/devices.sh@201 -- # ctrl=nvme1 00:10:55.850 13:53:35 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:10:55.850 13:53:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:10:55.850 13:53:35 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:10:55.850 13:53:35 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:10:55.850 13:53:35 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:10:55.850 No valid GPT data, bailing 00:10:55.850 13:53:35 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:10:55.850 13:53:35 -- scripts/common.sh@391 -- # pt= 00:10:55.850 13:53:35 -- scripts/common.sh@392 -- # return 1 00:10:55.850 13:53:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:10:55.850 13:53:35 -- setup/common.sh@76 -- # local dev=nvme1n1 00:10:55.850 13:53:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:10:55.850 13:53:35 -- setup/common.sh@80 -- # echo 5368709120 00:10:55.850 13:53:35 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:10:55.850 13:53:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:10:55.850 13:53:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:10:55.850 13:53:35 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:10:55.850 13:53:35 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:10:55.850 13:53:35 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:10:55.850 13:53:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:55.850 13:53:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.850 13:53:35 -- common/autotest_common.sh@10 -- # set +x 00:10:55.850 
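Before the mount tests start, setup/devices.sh walks every non-controller NVMe block device, skips anything that already carries a partition table ("No valid GPT data, bailing" means the disk is free), and records disks of at least 3 GiB together with their PCI address. A rough sketch of that pass; the sysfs path used to derive the PCI address and the blkid-only check are simplifications, since the traced script also consults scripts/spdk-gpt.py:

shopt -s extglob                            # needed for the nvme!(*c*) glob below
declare -a blocks=()
declare -A blocks_to_pci=()
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

for block in /sys/block/nvme!(*c*); do
    dev=${block##*/}                                         # e.g. nvme0n1
    pci=$(basename "$(readlink -f "$block/device/device")")  # controller BDF (assumes the usual sysfs layout)
    # A namespace is only usable for the tests if it has no partition table.
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
    size=$(( $(cat "$block/size") * 512 ))                   # sectors -> bytes
    (( size >= min_disk_size )) || continue
    blocks+=("$dev")
    blocks_to_pci[$dev]=$pci
done

In this run all four namespaces pass the check (nvme0n1..n3 at 4 GiB, nvme1n1 at 5 GiB) and nvme0n1 becomes the test disk.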
************************************ 00:10:55.850 START TEST nvme_mount 00:10:55.850 ************************************ 00:10:55.850 13:53:35 -- common/autotest_common.sh@1111 -- # nvme_mount 00:10:55.850 13:53:35 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:10:55.850 13:53:35 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:10:55.850 13:53:35 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:55.850 13:53:35 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:55.850 13:53:35 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:10:55.850 13:53:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:10:55.850 13:53:35 -- setup/common.sh@40 -- # local part_no=1 00:10:55.850 13:53:35 -- setup/common.sh@41 -- # local size=1073741824 00:10:55.850 13:53:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:10:55.850 13:53:35 -- setup/common.sh@44 -- # parts=() 00:10:55.850 13:53:35 -- setup/common.sh@44 -- # local parts 00:10:55.850 13:53:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:10:55.850 13:53:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:55.850 13:53:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:10:55.850 13:53:35 -- setup/common.sh@46 -- # (( part++ )) 00:10:55.850 13:53:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:10:55.850 13:53:35 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:10:55.850 13:53:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:10:55.850 13:53:35 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:10:57.230 Creating new GPT entries in memory. 00:10:57.230 GPT data structures destroyed! You may now partition the disk using fdisk or 00:10:57.230 other utilities. 00:10:57.230 13:53:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:10:57.230 13:53:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:57.230 13:53:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:10:57.230 13:53:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:10:57.230 13:53:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:10:58.167 Creating new GPT entries in memory. 00:10:58.167 The operation has completed successfully. 
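The "GPT data structures destroyed!" and "The operation has completed successfully." lines are ordinary sgdisk output from partition_drive in setup/common.sh: the disk is zapped, then each test partition is created under flock while a helper (sync_dev_uevents.sh in the trace) waits for the new partition's uevent. A minimal re-creation, with the geometry (--new=1:2048:264191) copied from the log and everything else illustrative:

disk=/dev/nvme0n1
size=$((1024 * 1024 * 1024))   # 1 GiB per test partition, in bytes
(( size /= 4096 ))             # divided by 4096 as in the trace, giving the 262144-unit span used below

sgdisk "$disk" --zap-all       # wipe any existing GPT/MBR structures

part_start=2048
part_end=$((part_start + size - 1))   # 264191 for the first partition
# flock keeps sgdisk from racing other users of the device while the
# uevent helper waits for /dev/nvme0n1p1 to appear.
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"

Once the partition exists, the nvme_mount test formats it with mkfs.ext4 -qF and mounts it under test/setup/nvme_mount, as the subsequent trace shows.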
00:10:58.167 13:53:37 -- setup/common.sh@57 -- # (( part++ )) 00:10:58.167 13:53:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:10:58.167 13:53:37 -- setup/common.sh@62 -- # wait 58303 00:10:58.167 13:53:37 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.167 13:53:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:10:58.167 13:53:37 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.167 13:53:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:10:58.167 13:53:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:10:58.426 13:53:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.426 13:53:37 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.426 13:53:37 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:58.426 13:53:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:10:58.426 13:53:37 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.426 13:53:37 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.426 13:53:37 -- setup/devices.sh@53 -- # local found=0 00:10:58.426 13:53:37 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:58.426 13:53:37 -- setup/devices.sh@56 -- # : 00:10:58.426 13:53:37 -- setup/devices.sh@59 -- # local pci status 00:10:58.426 13:53:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.426 13:53:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:58.426 13:53:37 -- setup/devices.sh@47 -- # setup output config 00:10:58.426 13:53:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:58.426 13:53:37 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:58.686 13:53:38 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.686 13:53:38 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:10:58.686 13:53:38 -- setup/devices.sh@63 -- # found=1 00:10:58.686 13:53:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.686 13:53:38 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.686 13:53:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.686 13:53:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.686 13:53:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.946 13:53:38 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:58.946 13:53:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:58.946 13:53:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:58.946 13:53:38 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:58.946 13:53:38 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.946 13:53:38 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:58.946 13:53:38 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:58.946 13:53:38 -- setup/devices.sh@110 -- # cleanup_nvme 00:10:58.946 13:53:38 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.946 13:53:38 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:58.946 13:53:38 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:10:58.946 13:53:38 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:10:58.946 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:10:58.946 13:53:38 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:10:58.946 13:53:38 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:10:59.206 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:59.206 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:59.206 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:59.206 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:59.206 13:53:38 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:10:59.206 13:53:38 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:10:59.206 13:53:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.206 13:53:38 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:10:59.206 13:53:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:10:59.206 13:53:38 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.466 13:53:38 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:59.466 13:53:38 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:59.466 13:53:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:10:59.466 13:53:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.466 13:53:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:59.466 13:53:38 -- setup/devices.sh@53 -- # local found=0 00:10:59.466 13:53:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:59.466 13:53:38 -- setup/devices.sh@56 -- # : 00:10:59.466 13:53:38 -- setup/devices.sh@59 -- # local pci status 00:10:59.466 13:53:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.466 13:53:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:59.466 13:53:38 -- setup/devices.sh@47 -- # setup output config 00:10:59.466 13:53:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:59.466 13:53:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:10:59.726 13:53:39 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.726 13:53:39 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:10:59.726 13:53:39 -- setup/devices.sh@63 -- # found=1 00:10:59.726 13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.726 13:53:39 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.726 
13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.726 13:53:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.726 13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.994 13:53:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:59.994 13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.994 13:53:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:10:59.994 13:53:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:10:59.994 13:53:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.994 13:53:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:10:59.994 13:53:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:10:59.994 13:53:39 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:10:59.994 13:53:39 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:10:59.994 13:53:39 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:10:59.994 13:53:39 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:10:59.994 13:53:39 -- setup/devices.sh@50 -- # local mount_point= 00:10:59.994 13:53:39 -- setup/devices.sh@51 -- # local test_file= 00:10:59.994 13:53:39 -- setup/devices.sh@53 -- # local found=0 00:10:59.994 13:53:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:10:59.994 13:53:39 -- setup/devices.sh@59 -- # local pci status 00:10:59.994 13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:10:59.994 13:53:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:10:59.994 13:53:39 -- setup/devices.sh@47 -- # setup output config 00:10:59.994 13:53:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:10:59.994 13:53:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:00.604 13:53:39 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.604 13:53:39 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:00.605 13:53:39 -- setup/devices.sh@63 -- # found=1 00:11:00.605 13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:00.605 13:53:39 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.605 13:53:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:00.605 13:53:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.605 13:53:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:00.605 13:53:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:00.605 13:53:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:00.869 13:53:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:00.869 13:53:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:00.869 13:53:40 -- setup/devices.sh@68 -- # return 0 00:11:00.869 13:53:40 -- setup/devices.sh@128 -- # cleanup_nvme 00:11:00.869 13:53:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:00.869 13:53:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:00.869 13:53:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:00.869 13:53:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:00.869 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:11:00.869 00:11:00.869 real 0m4.886s 00:11:00.869 user 0m0.865s 00:11:00.869 sys 0m1.518s 00:11:00.869 13:53:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:00.869 13:53:40 -- common/autotest_common.sh@10 -- # set +x 00:11:00.869 ************************************ 00:11:00.869 END TEST nvme_mount 00:11:00.869 ************************************ 00:11:00.869 13:53:40 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:00.869 13:53:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:00.869 13:53:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.869 13:53:40 -- common/autotest_common.sh@10 -- # set +x 00:11:00.869 ************************************ 00:11:00.869 START TEST dm_mount 00:11:00.869 ************************************ 00:11:00.869 13:53:40 -- common/autotest_common.sh@1111 -- # dm_mount 00:11:00.869 13:53:40 -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:00.869 13:53:40 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:00.869 13:53:40 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:00.869 13:53:40 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:00.869 13:53:40 -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:00.869 13:53:40 -- setup/common.sh@40 -- # local part_no=2 00:11:00.869 13:53:40 -- setup/common.sh@41 -- # local size=1073741824 00:11:00.869 13:53:40 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:00.869 13:53:40 -- setup/common.sh@44 -- # parts=() 00:11:00.869 13:53:40 -- setup/common.sh@44 -- # local parts 00:11:00.869 13:53:40 -- setup/common.sh@46 -- # (( part = 1 )) 00:11:00.869 13:53:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:00.869 13:53:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:00.869 13:53:40 -- setup/common.sh@46 -- # (( part++ )) 00:11:00.869 13:53:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:00.869 13:53:40 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:00.869 13:53:40 -- setup/common.sh@46 -- # (( part++ )) 00:11:00.869 13:53:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:00.869 13:53:40 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:00.869 13:53:40 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:00.869 13:53:40 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:02.246 Creating new GPT entries in memory. 00:11:02.246 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:02.247 other utilities. 00:11:02.247 13:53:41 -- setup/common.sh@57 -- # (( part = 1 )) 00:11:02.247 13:53:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:02.247 13:53:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:02.247 13:53:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:02.247 13:53:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:03.184 Creating new GPT entries in memory. 00:11:03.184 The operation has completed successfully. 00:11:03.184 13:53:42 -- setup/common.sh@57 -- # (( part++ )) 00:11:03.184 13:53:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:03.184 13:53:42 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:11:03.184 13:53:42 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:03.184 13:53:42 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:04.140 The operation has completed successfully. 00:11:04.140 13:53:43 -- setup/common.sh@57 -- # (( part++ )) 00:11:04.140 13:53:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:04.140 13:53:43 -- setup/common.sh@62 -- # wait 58752 00:11:04.140 13:53:43 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:04.140 13:53:43 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.140 13:53:43 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:04.140 13:53:43 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:04.140 13:53:43 -- setup/devices.sh@160 -- # for t in {1..5} 00:11:04.140 13:53:43 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:04.140 13:53:43 -- setup/devices.sh@161 -- # break 00:11:04.140 13:53:43 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:04.140 13:53:43 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:04.140 13:53:43 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:04.140 13:53:43 -- setup/devices.sh@166 -- # dm=dm-0 00:11:04.140 13:53:43 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:04.140 13:53:43 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:04.140 13:53:43 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.140 13:53:43 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:04.140 13:53:43 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.140 13:53:43 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:04.140 13:53:43 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:04.140 13:53:43 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.140 13:53:43 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:04.140 13:53:43 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:04.140 13:53:43 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:11:04.140 13:53:43 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.140 13:53:43 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:04.140 13:53:43 -- setup/devices.sh@53 -- # local found=0 00:11:04.140 13:53:43 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:04.140 13:53:43 -- setup/devices.sh@56 -- # : 00:11:04.140 13:53:43 -- setup/devices.sh@59 -- # local pci status 00:11:04.140 13:53:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.140 13:53:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:04.140 13:53:43 -- setup/devices.sh@47 -- # setup output config 00:11:04.140 13:53:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:04.140 13:53:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:04.398 13:53:44 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.398 13:53:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:04.398 13:53:44 -- setup/devices.sh@63 -- # found=1 00:11:04.398 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.398 13:53:44 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.398 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.657 13:53:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.657 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.915 13:53:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.915 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.915 13:53:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:04.915 13:53:44 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:04.915 13:53:44 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.915 13:53:44 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:04.915 13:53:44 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:04.915 13:53:44 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:04.915 13:53:44 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:04.915 13:53:44 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:04.915 13:53:44 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:04.915 13:53:44 -- setup/devices.sh@50 -- # local mount_point= 00:11:04.915 13:53:44 -- setup/devices.sh@51 -- # local test_file= 00:11:04.915 13:53:44 -- setup/devices.sh@53 -- # local found=0 00:11:04.915 13:53:44 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:04.915 13:53:44 -- setup/devices.sh@59 -- # local pci status 00:11:04.915 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:04.915 13:53:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:04.915 13:53:44 -- setup/devices.sh@47 -- # setup output config 00:11:04.915 13:53:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:11:04.915 13:53:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:05.172 13:53:44 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:05.172 13:53:44 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:05.172 13:53:44 -- setup/devices.sh@63 -- # found=1 00:11:05.172 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:05.172 13:53:44 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:05.172 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:05.430 13:53:44 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:05.430 13:53:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:05.688 13:53:45 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:05.688 13:53:45 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:05.688 13:53:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:05.688 13:53:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:05.688 13:53:45 -- setup/devices.sh@68 -- # return 0 00:11:05.688 13:53:45 -- setup/devices.sh@187 -- # cleanup_dm 00:11:05.688 13:53:45 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:05.688 13:53:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:05.688 13:53:45 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:05.688 13:53:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:05.688 13:53:45 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:05.688 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:05.688 13:53:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:05.688 13:53:45 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:11:05.688 00:11:05.688 real 0m4.781s 00:11:05.688 user 0m0.636s 00:11:05.688 sys 0m1.095s 00:11:05.688 ************************************ 00:11:05.688 END TEST dm_mount 00:11:05.688 ************************************ 00:11:05.688 13:53:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:05.688 13:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:05.688 13:53:45 -- setup/devices.sh@1 -- # cleanup 00:11:05.688 13:53:45 -- setup/devices.sh@11 -- # cleanup_nvme 00:11:05.688 13:53:45 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:05.688 13:53:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:05.688 13:53:45 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:05.688 13:53:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:05.688 13:53:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:06.257 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:06.257 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:06.257 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:06.257 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:06.257 13:53:45 -- setup/devices.sh@12 -- # cleanup_dm 00:11:06.257 13:53:45 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:06.257 13:53:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:06.257 13:53:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:06.257 13:53:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:06.257 13:53:45 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:06.257 13:53:45 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:06.257 00:11:06.257 real 0m11.639s 00:11:06.257 user 0m2.251s 00:11:06.257 sys 0m3.524s 00:11:06.257 ************************************ 00:11:06.257 END TEST devices 00:11:06.257 ************************************ 00:11:06.257 13:53:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:06.257 13:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:06.257 00:11:06.257 real 0m27.665s 00:11:06.257 user 0m8.660s 00:11:06.257 sys 0m13.297s 00:11:06.257 13:53:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:06.257 13:53:45 -- common/autotest_common.sh@10 -- # set +x 00:11:06.257 ************************************ 00:11:06.257 END TEST setup.sh 00:11:06.257 ************************************ 00:11:06.257 13:53:45 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:07.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.193 Hugepages 00:11:07.193 node hugesize free / total 00:11:07.193 node0 1048576kB 0 / 0 00:11:07.193 node0 2048kB 2048 / 2048 00:11:07.193 00:11:07.193 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:07.193 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:07.193 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:07.453 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:11:07.453 13:53:46 -- spdk/autotest.sh@130 -- # uname -s 00:11:07.453 13:53:46 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:07.453 13:53:46 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:07.453 13:53:46 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:08.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.391 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.391 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:08.391 13:53:48 -- common/autotest_common.sh@1518 -- # sleep 1 00:11:09.801 13:53:49 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:09.801 13:53:49 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:09.801 13:53:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:09.801 13:53:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:09.801 13:53:49 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:09.801 13:53:49 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:09.801 13:53:49 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:09.801 13:53:49 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:09.801 13:53:49 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:09.801 13:53:49 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:11:09.801 13:53:49 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:09.801 13:53:49 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:10.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:10.060 Waiting for block devices as requested 00:11:10.319 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:10.319 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:10.319 13:53:49 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:10.319 13:53:49 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:10.319 13:53:49 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:11:10.319 13:53:49 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:10.319 13:53:49 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:10.319 13:53:49 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:10.319 13:53:49 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:10.319 13:53:49 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:11:10.319 13:53:49 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:11:10.319 13:53:49 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:11:10.319 13:53:49 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:11:10.319 13:53:49 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:10.319 13:53:49 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:10.577 13:53:49 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:10.577 13:53:49 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:10.577 13:53:49 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:10.577 13:53:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:11:10.577 13:53:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:10.577 13:53:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:10.577 13:53:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:10.577 13:53:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:10.577 13:53:50 -- common/autotest_common.sh@1543 -- # continue 00:11:10.577 13:53:50 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:11:10.577 13:53:50 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:10.577 13:53:50 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:10.577 13:53:50 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:11:10.577 13:53:50 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:10.577 13:53:50 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:10.577 13:53:50 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:10.577 13:53:50 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:11:10.577 13:53:50 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:11:10.577 13:53:50 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:11:10.577 13:53:50 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:11:10.577 13:53:50 -- common/autotest_common.sh@1531 -- # grep oacs 00:11:10.577 13:53:50 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:11:10.578 13:53:50 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:11:10.578 13:53:50 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:11:10.578 13:53:50 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:11:10.578 13:53:50 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:11:10.578 13:53:50 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:11:10.578 13:53:50 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:11:10.578 13:53:50 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:11:10.578 13:53:50 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:11:10.578 13:53:50 -- common/autotest_common.sh@1543 -- # continue 00:11:10.578 13:53:50 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:10.578 13:53:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:10.578 13:53:50 -- common/autotest_common.sh@10 -- # set +x 00:11:10.578 13:53:50 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:10.578 13:53:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:10.578 13:53:50 -- common/autotest_common.sh@10 -- # set +x 00:11:10.578 13:53:50 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:11.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:11:11.511 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.511 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:11.511 13:53:51 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:11.511 13:53:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:11.511 13:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:11.511 13:53:51 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:11.511 13:53:51 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:11:11.511 13:53:51 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:11:11.511 13:53:51 -- common/autotest_common.sh@1563 -- # bdfs=() 00:11:11.511 13:53:51 -- common/autotest_common.sh@1563 -- # local bdfs 00:11:11.511 13:53:51 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:11:11.511 13:53:51 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:11.511 13:53:51 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:11.511 13:53:51 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:11.770 13:53:51 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:11.770 13:53:51 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:11.770 13:53:51 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:11:11.770 13:53:51 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:11.770 13:53:51 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:11:11.770 13:53:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:11.770 13:53:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:11.770 13:53:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:11.770 13:53:51 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:11:11.770 13:53:51 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:11.770 13:53:51 -- common/autotest_common.sh@1566 -- # device=0x0010 00:11:11.770 13:53:51 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:11.770 13:53:51 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:11:11.770 13:53:51 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:11:11.770 13:53:51 -- common/autotest_common.sh@1579 -- # return 0 00:11:11.770 13:53:51 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:11.770 13:53:51 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:11.770 13:53:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:11.770 13:53:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:11.770 13:53:51 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:11.770 13:53:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:11.770 13:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:11.770 13:53:51 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:11.770 13:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:11.770 13:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:11.770 13:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:11.770 ************************************ 00:11:11.770 START TEST env 00:11:11.770 ************************************ 00:11:11.770 13:53:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:12.028 * Looking for test storage... 
00:11:12.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:12.028 13:53:51 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:12.028 13:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.028 13:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.028 13:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:12.028 ************************************ 00:11:12.028 START TEST env_memory 00:11:12.028 ************************************ 00:11:12.028 13:53:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:12.028 00:11:12.028 00:11:12.028 CUnit - A unit testing framework for C - Version 2.1-3 00:11:12.028 http://cunit.sourceforge.net/ 00:11:12.028 00:11:12.028 00:11:12.028 Suite: memory 00:11:12.028 Test: alloc and free memory map ...[2024-04-26 13:53:51.681886] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:12.287 passed 00:11:12.287 Test: mem map translation ...[2024-04-26 13:53:51.723017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:12.287 [2024-04-26 13:53:51.723078] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:12.287 [2024-04-26 13:53:51.723146] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:12.287 [2024-04-26 13:53:51.723174] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:12.287 passed 00:11:12.287 Test: mem map registration ...[2024-04-26 13:53:51.787093] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:12.287 [2024-04-26 13:53:51.787149] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:12.287 passed 00:11:12.287 Test: mem map adjacent registrations ...passed 00:11:12.287 00:11:12.287 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.287 suites 1 1 n/a 0 0 00:11:12.287 tests 4 4 4 0 0 00:11:12.287 asserts 152 152 152 0 n/a 00:11:12.287 00:11:12.287 Elapsed time = 0.229 seconds 00:11:12.287 00:11:12.287 real 0m0.275s 00:11:12.287 user 0m0.246s 00:11:12.287 sys 0m0.028s 00:11:12.287 13:53:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.287 ************************************ 00:11:12.287 END TEST env_memory 00:11:12.287 ************************************ 00:11:12.287 13:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:12.287 13:53:51 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:12.287 13:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.287 13:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.287 13:53:51 -- common/autotest_common.sh@10 -- # set +x 00:11:12.544 ************************************ 00:11:12.544 START TEST env_vtophys 00:11:12.544 ************************************ 00:11:12.544 13:53:52 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:12.545 EAL: lib.eal log level changed from notice to debug 00:11:12.545 EAL: Detected lcore 0 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 1 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 2 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 3 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 4 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 5 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 6 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 7 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 8 as core 0 on socket 0 00:11:12.545 EAL: Detected lcore 9 as core 0 on socket 0 00:11:12.545 EAL: Maximum logical cores by configuration: 128 00:11:12.545 EAL: Detected CPU lcores: 10 00:11:12.545 EAL: Detected NUMA nodes: 1 00:11:12.545 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:11:12.545 EAL: Detected shared linkage of DPDK 00:11:12.545 EAL: No shared files mode enabled, IPC will be disabled 00:11:12.545 EAL: Selected IOVA mode 'PA' 00:11:12.545 EAL: Probing VFIO support... 00:11:12.545 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:12.545 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:12.545 EAL: Ask a virtual area of 0x2e000 bytes 00:11:12.545 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:12.545 EAL: Setting up physically contiguous memory... 00:11:12.545 EAL: Setting maximum number of open files to 524288 00:11:12.545 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:12.545 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:12.545 EAL: Ask a virtual area of 0x61000 bytes 00:11:12.545 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:12.545 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:12.545 EAL: Ask a virtual area of 0x400000000 bytes 00:11:12.545 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:12.545 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:12.545 EAL: Ask a virtual area of 0x61000 bytes 00:11:12.545 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:12.545 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:12.545 EAL: Ask a virtual area of 0x400000000 bytes 00:11:12.545 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:12.545 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:12.545 EAL: Ask a virtual area of 0x61000 bytes 00:11:12.545 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:12.545 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:12.545 EAL: Ask a virtual area of 0x400000000 bytes 00:11:12.545 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:12.545 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:12.545 EAL: Ask a virtual area of 0x61000 bytes 00:11:12.545 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:12.545 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:12.545 EAL: Ask a virtual area of 0x400000000 bytes 00:11:12.545 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:12.545 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:12.545 EAL: Hugepages will be freed exactly as allocated. 
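Note: the EAL output above reserves hugepage-backed memseg lists and reports that the vfio kernel module is absent, which is why IOVA mode 'PA' is selected and why the earlier setup.sh passes bound the NVMe controllers to uio_pci_generic. A minimal sketch (not part of the captured run) of how those two preconditions could be checked on the build VM; the node0 paths assume the single NUMA node reported above:
# 2 MB hugepage availability on node 0 (the status output above reports 2048 free / 2048 total)
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
grep -i hugepages /proc/meminfo
# vfio-pci presence; when it is missing, EAL skips VFIO support and
# setup.sh falls back to uio_pci_generic, as seen in this log
if [ -d /sys/module/vfio_pci ]; then
    echo "vfio-pci loaded"
else
    echo "vfio-pci not loaded (uio_pci_generic fallback expected)"
fi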
00:11:12.545 EAL: No shared files mode enabled, IPC is disabled 00:11:12.545 EAL: No shared files mode enabled, IPC is disabled 00:11:12.803 EAL: TSC frequency is ~2490000 KHz 00:11:12.803 EAL: Main lcore 0 is ready (tid=7fabe16cba40;cpuset=[0]) 00:11:12.803 EAL: Trying to obtain current memory policy. 00:11:12.803 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:12.803 EAL: Restoring previous memory policy: 0 00:11:12.803 EAL: request: mp_malloc_sync 00:11:12.803 EAL: No shared files mode enabled, IPC is disabled 00:11:12.803 EAL: Heap on socket 0 was expanded by 2MB 00:11:12.803 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:12.803 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:12.803 EAL: Mem event callback 'spdk:(nil)' registered 00:11:12.803 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:12.803 00:11:12.803 00:11:12.803 CUnit - A unit testing framework for C - Version 2.1-3 00:11:12.803 http://cunit.sourceforge.net/ 00:11:12.803 00:11:12.803 00:11:12.803 Suite: components_suite 00:11:13.063 Test: vtophys_malloc_test ...passed 00:11:13.063 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:11:13.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.063 EAL: Restoring previous memory policy: 4 00:11:13.063 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.063 EAL: request: mp_malloc_sync 00:11:13.063 EAL: No shared files mode enabled, IPC is disabled 00:11:13.063 EAL: Heap on socket 0 was expanded by 4MB 00:11:13.063 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.063 EAL: request: mp_malloc_sync 00:11:13.063 EAL: No shared files mode enabled, IPC is disabled 00:11:13.063 EAL: Heap on socket 0 was shrunk by 4MB 00:11:13.063 EAL: Trying to obtain current memory policy. 00:11:13.063 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.063 EAL: Restoring previous memory policy: 4 00:11:13.063 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.063 EAL: request: mp_malloc_sync 00:11:13.063 EAL: No shared files mode enabled, IPC is disabled 00:11:13.063 EAL: Heap on socket 0 was expanded by 6MB 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was shrunk by 6MB 00:11:13.322 EAL: Trying to obtain current memory policy. 00:11:13.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.322 EAL: Restoring previous memory policy: 4 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was expanded by 10MB 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was shrunk by 10MB 00:11:13.322 EAL: Trying to obtain current memory policy. 
00:11:13.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.322 EAL: Restoring previous memory policy: 4 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was expanded by 18MB 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was shrunk by 18MB 00:11:13.322 EAL: Trying to obtain current memory policy. 00:11:13.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.322 EAL: Restoring previous memory policy: 4 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was expanded by 34MB 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was shrunk by 34MB 00:11:13.322 EAL: Trying to obtain current memory policy. 00:11:13.322 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.322 EAL: Restoring previous memory policy: 4 00:11:13.322 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.322 EAL: request: mp_malloc_sync 00:11:13.322 EAL: No shared files mode enabled, IPC is disabled 00:11:13.322 EAL: Heap on socket 0 was expanded by 66MB 00:11:13.581 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.581 EAL: request: mp_malloc_sync 00:11:13.581 EAL: No shared files mode enabled, IPC is disabled 00:11:13.581 EAL: Heap on socket 0 was shrunk by 66MB 00:11:13.581 EAL: Trying to obtain current memory policy. 00:11:13.581 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:13.581 EAL: Restoring previous memory policy: 4 00:11:13.581 EAL: Calling mem event callback 'spdk:(nil)' 00:11:13.581 EAL: request: mp_malloc_sync 00:11:13.581 EAL: No shared files mode enabled, IPC is disabled 00:11:13.581 EAL: Heap on socket 0 was expanded by 130MB 00:11:13.839 EAL: Calling mem event callback 'spdk:(nil)' 00:11:14.126 EAL: request: mp_malloc_sync 00:11:14.126 EAL: No shared files mode enabled, IPC is disabled 00:11:14.126 EAL: Heap on socket 0 was shrunk by 130MB 00:11:14.126 EAL: Trying to obtain current memory policy. 00:11:14.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:14.126 EAL: Restoring previous memory policy: 4 00:11:14.126 EAL: Calling mem event callback 'spdk:(nil)' 00:11:14.126 EAL: request: mp_malloc_sync 00:11:14.126 EAL: No shared files mode enabled, IPC is disabled 00:11:14.126 EAL: Heap on socket 0 was expanded by 258MB 00:11:14.693 EAL: Calling mem event callback 'spdk:(nil)' 00:11:14.693 EAL: request: mp_malloc_sync 00:11:14.693 EAL: No shared files mode enabled, IPC is disabled 00:11:14.693 EAL: Heap on socket 0 was shrunk by 258MB 00:11:15.259 EAL: Trying to obtain current memory policy. 
00:11:15.259 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:15.259 EAL: Restoring previous memory policy: 4 00:11:15.259 EAL: Calling mem event callback 'spdk:(nil)' 00:11:15.259 EAL: request: mp_malloc_sync 00:11:15.259 EAL: No shared files mode enabled, IPC is disabled 00:11:15.259 EAL: Heap on socket 0 was expanded by 514MB 00:11:16.631 EAL: Calling mem event callback 'spdk:(nil)' 00:11:16.631 EAL: request: mp_malloc_sync 00:11:16.631 EAL: No shared files mode enabled, IPC is disabled 00:11:16.631 EAL: Heap on socket 0 was shrunk by 514MB 00:11:17.199 EAL: Trying to obtain current memory policy. 00:11:17.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:17.458 EAL: Restoring previous memory policy: 4 00:11:17.458 EAL: Calling mem event callback 'spdk:(nil)' 00:11:17.458 EAL: request: mp_malloc_sync 00:11:17.458 EAL: No shared files mode enabled, IPC is disabled 00:11:17.458 EAL: Heap on socket 0 was expanded by 1026MB 00:11:19.994 EAL: Calling mem event callback 'spdk:(nil)' 00:11:19.994 EAL: request: mp_malloc_sync 00:11:19.994 EAL: No shared files mode enabled, IPC is disabled 00:11:19.994 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:21.374 passed 00:11:21.374 00:11:21.374 Run Summary: Type Total Ran Passed Failed Inactive 00:11:21.374 suites 1 1 n/a 0 0 00:11:21.374 tests 2 2 2 0 0 00:11:21.374 asserts 5383 5383 5383 0 n/a 00:11:21.374 00:11:21.374 Elapsed time = 8.581 seconds 00:11:21.374 EAL: Calling mem event callback 'spdk:(nil)' 00:11:21.374 EAL: request: mp_malloc_sync 00:11:21.374 EAL: No shared files mode enabled, IPC is disabled 00:11:21.374 EAL: Heap on socket 0 was shrunk by 2MB 00:11:21.374 EAL: No shared files mode enabled, IPC is disabled 00:11:21.374 EAL: No shared files mode enabled, IPC is disabled 00:11:21.374 EAL: No shared files mode enabled, IPC is disabled 00:11:21.374 00:11:21.374 real 0m8.908s 00:11:21.374 user 0m7.912s 00:11:21.374 sys 0m0.834s 00:11:21.374 13:54:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.374 13:54:00 -- common/autotest_common.sh@10 -- # set +x 00:11:21.374 ************************************ 00:11:21.374 END TEST env_vtophys 00:11:21.374 ************************************ 00:11:21.374 13:54:01 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:21.374 13:54:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:21.374 13:54:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.374 13:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:21.633 ************************************ 00:11:21.633 START TEST env_pci 00:11:21.633 ************************************ 00:11:21.633 13:54:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:21.633 00:11:21.633 00:11:21.633 CUnit - A unit testing framework for C - Version 2.1-3 00:11:21.633 http://cunit.sourceforge.net/ 00:11:21.633 00:11:21.633 00:11:21.633 Suite: pci 00:11:21.633 Test: pci_hook ...[2024-04-26 13:54:01.168247] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60066 has claimed it 00:11:21.633 passed 00:11:21.633 00:11:21.633 Run Summary: Type Total Ran Passed Failed Inactive 00:11:21.633 suites 1 1 n/a 0 0 00:11:21.633 tests 1 1 1 0 0 00:11:21.633 asserts 25 25 25 0 n/a 00:11:21.633 00:11:21.633 Elapsed time = 0.010 secondsEAL: Cannot find device (10000:00:01.0) 00:11:21.633 EAL: Failed to attach device on primary 
process 00:11:21.633 00:11:21.633 00:11:21.633 real 0m0.118s 00:11:21.633 user 0m0.043s 00:11:21.633 sys 0m0.072s 00:11:21.633 13:54:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.633 ************************************ 00:11:21.633 13:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:21.633 END TEST env_pci 00:11:21.633 ************************************ 00:11:21.633 13:54:01 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:21.633 13:54:01 -- env/env.sh@15 -- # uname 00:11:21.633 13:54:01 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:21.633 13:54:01 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:21.633 13:54:01 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:21.633 13:54:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:21.633 13:54:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.633 13:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:21.891 ************************************ 00:11:21.892 START TEST env_dpdk_post_init 00:11:21.892 ************************************ 00:11:21.892 13:54:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:21.892 EAL: Detected CPU lcores: 10 00:11:21.892 EAL: Detected NUMA nodes: 1 00:11:21.892 EAL: Detected shared linkage of DPDK 00:11:21.892 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:21.892 EAL: Selected IOVA mode 'PA' 00:11:22.149 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:22.149 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:22.149 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:22.149 Starting DPDK initialization... 00:11:22.149 Starting SPDK post initialization... 00:11:22.149 SPDK NVMe probe 00:11:22.149 Attaching to 0000:00:10.0 00:11:22.149 Attaching to 0000:00:11.0 00:11:22.150 Attached to 0000:00:10.0 00:11:22.150 Attached to 0000:00:11.0 00:11:22.150 Cleaning up... 
00:11:22.150 00:11:22.150 real 0m0.287s 00:11:22.150 user 0m0.084s 00:11:22.150 sys 0m0.103s 00:11:22.150 13:54:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.150 13:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:22.150 ************************************ 00:11:22.150 END TEST env_dpdk_post_init 00:11:22.150 ************************************ 00:11:22.150 13:54:01 -- env/env.sh@26 -- # uname 00:11:22.150 13:54:01 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:22.150 13:54:01 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:22.150 13:54:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.150 13:54:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.150 13:54:01 -- common/autotest_common.sh@10 -- # set +x 00:11:22.407 ************************************ 00:11:22.407 START TEST env_mem_callbacks 00:11:22.407 ************************************ 00:11:22.407 13:54:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:22.407 EAL: Detected CPU lcores: 10 00:11:22.407 EAL: Detected NUMA nodes: 1 00:11:22.407 EAL: Detected shared linkage of DPDK 00:11:22.407 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:22.407 EAL: Selected IOVA mode 'PA' 00:11:22.407 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:22.407 00:11:22.407 00:11:22.407 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.407 http://cunit.sourceforge.net/ 00:11:22.407 00:11:22.407 00:11:22.407 Suite: memory 00:11:22.407 Test: test ... 00:11:22.407 register 0x200000200000 2097152 00:11:22.407 malloc 3145728 00:11:22.407 register 0x200000400000 4194304 00:11:22.407 buf 0x2000004fffc0 len 3145728 PASSED 00:11:22.407 malloc 64 00:11:22.407 buf 0x2000004ffec0 len 64 PASSED 00:11:22.407 malloc 4194304 00:11:22.407 register 0x200000800000 6291456 00:11:22.407 buf 0x2000009fffc0 len 4194304 PASSED 00:11:22.407 free 0x2000004fffc0 3145728 00:11:22.407 free 0x2000004ffec0 64 00:11:22.665 unregister 0x200000400000 4194304 PASSED 00:11:22.665 free 0x2000009fffc0 4194304 00:11:22.665 unregister 0x200000800000 6291456 PASSED 00:11:22.665 malloc 8388608 00:11:22.665 register 0x200000400000 10485760 00:11:22.665 buf 0x2000005fffc0 len 8388608 PASSED 00:11:22.665 free 0x2000005fffc0 8388608 00:11:22.665 unregister 0x200000400000 10485760 PASSED 00:11:22.665 passed 00:11:22.665 00:11:22.665 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.665 suites 1 1 n/a 0 0 00:11:22.665 tests 1 1 1 0 0 00:11:22.665 asserts 15 15 15 0 n/a 00:11:22.665 00:11:22.665 Elapsed time = 0.087 seconds 00:11:22.665 00:11:22.665 real 0m0.297s 00:11:22.665 user 0m0.117s 00:11:22.665 sys 0m0.078s 00:11:22.665 13:54:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.665 13:54:02 -- common/autotest_common.sh@10 -- # set +x 00:11:22.665 ************************************ 00:11:22.665 END TEST env_mem_callbacks 00:11:22.665 ************************************ 00:11:22.665 00:11:22.665 real 0m10.835s 00:11:22.665 user 0m8.734s 00:11:22.665 sys 0m1.642s 00:11:22.665 13:54:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:22.665 13:54:02 -- common/autotest_common.sh@10 -- # set +x 00:11:22.665 ************************************ 00:11:22.665 END TEST env 00:11:22.665 ************************************ 00:11:22.665 13:54:02 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
00:11:22.665 13:54:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:22.665 13:54:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:22.665 13:54:02 -- common/autotest_common.sh@10 -- # set +x 00:11:22.924 ************************************ 00:11:22.924 START TEST rpc 00:11:22.924 ************************************ 00:11:22.924 13:54:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:22.924 * Looking for test storage... 00:11:22.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:22.924 13:54:02 -- rpc/rpc.sh@65 -- # spdk_pid=60204 00:11:22.924 13:54:02 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:22.924 13:54:02 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:22.924 13:54:02 -- rpc/rpc.sh@67 -- # waitforlisten 60204 00:11:22.924 13:54:02 -- common/autotest_common.sh@817 -- # '[' -z 60204 ']' 00:11:22.924 13:54:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.924 13:54:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:22.924 13:54:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.924 13:54:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:22.924 13:54:02 -- common/autotest_common.sh@10 -- # set +x 00:11:23.181 [2024-04-26 13:54:02.613320] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:23.181 [2024-04-26 13:54:02.613443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60204 ] 00:11:23.181 [2024-04-26 13:54:02.786593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.440 [2024-04-26 13:54:03.036436] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:23.440 [2024-04-26 13:54:03.036494] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60204' to capture a snapshot of events at runtime. 00:11:23.440 [2024-04-26 13:54:03.036507] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.440 [2024-04-26 13:54:03.036521] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.440 [2024-04-26 13:54:03.036532] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60204 for offline analysis/debug. 
00:11:23.440 [2024-04-26 13:54:03.036565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.379 13:54:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:24.379 13:54:03 -- common/autotest_common.sh@850 -- # return 0 00:11:24.379 13:54:03 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:24.379 13:54:03 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:24.379 13:54:03 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:24.379 13:54:03 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:24.379 13:54:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.379 13:54:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.379 13:54:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.637 ************************************ 00:11:24.637 START TEST rpc_integrity 00:11:24.637 ************************************ 00:11:24.637 13:54:04 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:11:24.637 13:54:04 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.637 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.637 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.637 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.637 13:54:04 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:24.637 13:54:04 -- rpc/rpc.sh@13 -- # jq length 00:11:24.637 13:54:04 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:24.637 13:54:04 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:24.637 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.637 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.637 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.637 13:54:04 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:24.637 13:54:04 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:24.637 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.637 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.637 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.637 13:54:04 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:24.637 { 00:11:24.637 "aliases": [ 00:11:24.637 "bddd50dd-465f-4d7f-8836-44d86b96c147" 00:11:24.637 ], 00:11:24.637 "assigned_rate_limits": { 00:11:24.637 "r_mbytes_per_sec": 0, 00:11:24.637 "rw_ios_per_sec": 0, 00:11:24.637 "rw_mbytes_per_sec": 0, 00:11:24.637 "w_mbytes_per_sec": 0 00:11:24.637 }, 00:11:24.637 "block_size": 512, 00:11:24.637 "claimed": false, 00:11:24.637 "driver_specific": {}, 00:11:24.637 "memory_domains": [ 00:11:24.637 { 00:11:24.637 "dma_device_id": "system", 00:11:24.637 "dma_device_type": 1 00:11:24.637 }, 00:11:24.637 { 00:11:24.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.637 "dma_device_type": 2 00:11:24.637 } 00:11:24.637 ], 00:11:24.637 "name": "Malloc0", 00:11:24.637 "num_blocks": 16384, 00:11:24.637 "product_name": "Malloc disk", 00:11:24.637 "supported_io_types": { 00:11:24.637 "abort": true, 00:11:24.638 "compare": false, 00:11:24.638 "compare_and_write": false, 00:11:24.638 "flush": true, 00:11:24.638 "nvme_admin": false, 00:11:24.638 "nvme_io": false, 00:11:24.638 "read": true, 00:11:24.638 "reset": true, 
00:11:24.638 "unmap": true, 00:11:24.638 "write": true, 00:11:24.638 "write_zeroes": true 00:11:24.638 }, 00:11:24.638 "uuid": "bddd50dd-465f-4d7f-8836-44d86b96c147", 00:11:24.638 "zoned": false 00:11:24.638 } 00:11:24.638 ]' 00:11:24.638 13:54:04 -- rpc/rpc.sh@17 -- # jq length 00:11:24.638 13:54:04 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:24.638 13:54:04 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:24.638 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.638 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.638 [2024-04-26 13:54:04.258346] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:24.638 [2024-04-26 13:54:04.258419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.638 [2024-04-26 13:54:04.258444] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:24.638 [2024-04-26 13:54:04.258459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.638 [2024-04-26 13:54:04.260956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.638 [2024-04-26 13:54:04.261002] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:24.638 Passthru0 00:11:24.638 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.638 13:54:04 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:24.638 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.638 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.638 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.638 13:54:04 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:24.638 { 00:11:24.638 "aliases": [ 00:11:24.638 "bddd50dd-465f-4d7f-8836-44d86b96c147" 00:11:24.638 ], 00:11:24.638 "assigned_rate_limits": { 00:11:24.638 "r_mbytes_per_sec": 0, 00:11:24.638 "rw_ios_per_sec": 0, 00:11:24.638 "rw_mbytes_per_sec": 0, 00:11:24.638 "w_mbytes_per_sec": 0 00:11:24.638 }, 00:11:24.638 "block_size": 512, 00:11:24.638 "claim_type": "exclusive_write", 00:11:24.638 "claimed": true, 00:11:24.638 "driver_specific": {}, 00:11:24.638 "memory_domains": [ 00:11:24.638 { 00:11:24.638 "dma_device_id": "system", 00:11:24.638 "dma_device_type": 1 00:11:24.638 }, 00:11:24.638 { 00:11:24.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.638 "dma_device_type": 2 00:11:24.638 } 00:11:24.638 ], 00:11:24.638 "name": "Malloc0", 00:11:24.638 "num_blocks": 16384, 00:11:24.638 "product_name": "Malloc disk", 00:11:24.638 "supported_io_types": { 00:11:24.638 "abort": true, 00:11:24.638 "compare": false, 00:11:24.638 "compare_and_write": false, 00:11:24.638 "flush": true, 00:11:24.638 "nvme_admin": false, 00:11:24.638 "nvme_io": false, 00:11:24.638 "read": true, 00:11:24.638 "reset": true, 00:11:24.638 "unmap": true, 00:11:24.638 "write": true, 00:11:24.638 "write_zeroes": true 00:11:24.638 }, 00:11:24.638 "uuid": "bddd50dd-465f-4d7f-8836-44d86b96c147", 00:11:24.638 "zoned": false 00:11:24.638 }, 00:11:24.638 { 00:11:24.638 "aliases": [ 00:11:24.638 "2d9335aa-fd41-59aa-ae1e-1111e0bdd724" 00:11:24.638 ], 00:11:24.638 "assigned_rate_limits": { 00:11:24.638 "r_mbytes_per_sec": 0, 00:11:24.638 "rw_ios_per_sec": 0, 00:11:24.638 "rw_mbytes_per_sec": 0, 00:11:24.638 "w_mbytes_per_sec": 0 00:11:24.638 }, 00:11:24.638 "block_size": 512, 00:11:24.638 "claimed": false, 00:11:24.638 "driver_specific": { 00:11:24.638 "passthru": { 00:11:24.638 "base_bdev_name": "Malloc0", 00:11:24.638 
"name": "Passthru0" 00:11:24.638 } 00:11:24.638 }, 00:11:24.638 "memory_domains": [ 00:11:24.638 { 00:11:24.638 "dma_device_id": "system", 00:11:24.638 "dma_device_type": 1 00:11:24.638 }, 00:11:24.638 { 00:11:24.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.638 "dma_device_type": 2 00:11:24.638 } 00:11:24.638 ], 00:11:24.638 "name": "Passthru0", 00:11:24.638 "num_blocks": 16384, 00:11:24.638 "product_name": "passthru", 00:11:24.638 "supported_io_types": { 00:11:24.638 "abort": true, 00:11:24.638 "compare": false, 00:11:24.638 "compare_and_write": false, 00:11:24.638 "flush": true, 00:11:24.638 "nvme_admin": false, 00:11:24.638 "nvme_io": false, 00:11:24.638 "read": true, 00:11:24.638 "reset": true, 00:11:24.638 "unmap": true, 00:11:24.638 "write": true, 00:11:24.638 "write_zeroes": true 00:11:24.638 }, 00:11:24.638 "uuid": "2d9335aa-fd41-59aa-ae1e-1111e0bdd724", 00:11:24.638 "zoned": false 00:11:24.638 } 00:11:24.638 ]' 00:11:24.638 13:54:04 -- rpc/rpc.sh@21 -- # jq length 00:11:24.896 13:54:04 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:24.896 13:54:04 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:24.896 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.896 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.896 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.896 13:54:04 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:24.896 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.896 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.896 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.896 13:54:04 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:24.896 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:24.896 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.896 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:24.896 13:54:04 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:24.896 13:54:04 -- rpc/rpc.sh@26 -- # jq length 00:11:24.896 13:54:04 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:24.896 00:11:24.896 real 0m0.357s 00:11:24.896 user 0m0.203s 00:11:24.896 sys 0m0.041s 00:11:24.896 13:54:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:24.896 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:24.896 ************************************ 00:11:24.896 END TEST rpc_integrity 00:11:24.896 ************************************ 00:11:24.896 13:54:04 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:24.896 13:54:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.896 13:54:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.896 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.155 ************************************ 00:11:25.155 START TEST rpc_plugins 00:11:25.155 ************************************ 00:11:25.155 13:54:04 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:11:25.155 13:54:04 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:25.155 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.155 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.155 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.155 13:54:04 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:25.155 13:54:04 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:25.155 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.155 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.155 
13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.155 13:54:04 -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:25.155 { 00:11:25.155 "aliases": [ 00:11:25.155 "bc8b553a-6abc-406e-9c37-b36255dfa752" 00:11:25.155 ], 00:11:25.155 "assigned_rate_limits": { 00:11:25.155 "r_mbytes_per_sec": 0, 00:11:25.155 "rw_ios_per_sec": 0, 00:11:25.155 "rw_mbytes_per_sec": 0, 00:11:25.155 "w_mbytes_per_sec": 0 00:11:25.155 }, 00:11:25.155 "block_size": 4096, 00:11:25.155 "claimed": false, 00:11:25.155 "driver_specific": {}, 00:11:25.155 "memory_domains": [ 00:11:25.155 { 00:11:25.155 "dma_device_id": "system", 00:11:25.155 "dma_device_type": 1 00:11:25.155 }, 00:11:25.155 { 00:11:25.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.155 "dma_device_type": 2 00:11:25.155 } 00:11:25.155 ], 00:11:25.155 "name": "Malloc1", 00:11:25.155 "num_blocks": 256, 00:11:25.155 "product_name": "Malloc disk", 00:11:25.155 "supported_io_types": { 00:11:25.155 "abort": true, 00:11:25.155 "compare": false, 00:11:25.155 "compare_and_write": false, 00:11:25.155 "flush": true, 00:11:25.155 "nvme_admin": false, 00:11:25.155 "nvme_io": false, 00:11:25.155 "read": true, 00:11:25.155 "reset": true, 00:11:25.155 "unmap": true, 00:11:25.155 "write": true, 00:11:25.155 "write_zeroes": true 00:11:25.155 }, 00:11:25.155 "uuid": "bc8b553a-6abc-406e-9c37-b36255dfa752", 00:11:25.155 "zoned": false 00:11:25.155 } 00:11:25.155 ]' 00:11:25.155 13:54:04 -- rpc/rpc.sh@32 -- # jq length 00:11:25.155 13:54:04 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:25.155 13:54:04 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:25.155 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.155 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.155 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.155 13:54:04 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:25.155 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.155 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.155 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.155 13:54:04 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:25.155 13:54:04 -- rpc/rpc.sh@36 -- # jq length 00:11:25.155 13:54:04 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:25.155 00:11:25.155 real 0m0.150s 00:11:25.155 user 0m0.080s 00:11:25.155 sys 0m0.031s 00:11:25.155 13:54:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:25.155 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.155 ************************************ 00:11:25.155 END TEST rpc_plugins 00:11:25.155 ************************************ 00:11:25.155 13:54:04 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:25.155 13:54:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:25.155 13:54:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.155 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 ************************************ 00:11:25.415 START TEST rpc_trace_cmd_test 00:11:25.415 ************************************ 00:11:25.415 13:54:04 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:11:25.415 13:54:04 -- rpc/rpc.sh@40 -- # local info 00:11:25.415 13:54:04 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:25.415 13:54:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.415 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:11:25.415 13:54:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.415 13:54:04 -- 
rpc/rpc.sh@42 -- # info='{ 00:11:25.415 "bdev": { 00:11:25.415 "mask": "0x8", 00:11:25.415 "tpoint_mask": "0xffffffffffffffff" 00:11:25.415 }, 00:11:25.415 "bdev_nvme": { 00:11:25.415 "mask": "0x4000", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "blobfs": { 00:11:25.415 "mask": "0x80", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "dsa": { 00:11:25.415 "mask": "0x200", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "ftl": { 00:11:25.415 "mask": "0x40", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "iaa": { 00:11:25.415 "mask": "0x1000", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "iscsi_conn": { 00:11:25.415 "mask": "0x2", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "nvme_pcie": { 00:11:25.415 "mask": "0x800", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "nvme_tcp": { 00:11:25.415 "mask": "0x2000", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "nvmf_rdma": { 00:11:25.415 "mask": "0x10", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "nvmf_tcp": { 00:11:25.415 "mask": "0x20", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "scsi": { 00:11:25.415 "mask": "0x4", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "sock": { 00:11:25.415 "mask": "0x8000", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "thread": { 00:11:25.415 "mask": "0x400", 00:11:25.415 "tpoint_mask": "0x0" 00:11:25.415 }, 00:11:25.415 "tpoint_group_mask": "0x8", 00:11:25.415 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60204" 00:11:25.415 }' 00:11:25.415 13:54:04 -- rpc/rpc.sh@43 -- # jq length 00:11:25.415 13:54:04 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:25.415 13:54:04 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:25.415 13:54:04 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:25.415 13:54:04 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:25.415 13:54:05 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:25.415 13:54:05 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:25.415 13:54:05 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:25.415 13:54:05 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:25.675 13:54:05 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:25.675 00:11:25.675 real 0m0.236s 00:11:25.675 user 0m0.183s 00:11:25.675 sys 0m0.042s 00:11:25.675 13:54:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:25.675 ************************************ 00:11:25.675 END TEST rpc_trace_cmd_test 00:11:25.675 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:25.675 ************************************ 00:11:25.675 13:54:05 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:11:25.675 13:54:05 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:11:25.675 13:54:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:25.675 13:54:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.675 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:25.675 ************************************ 00:11:25.675 START TEST go_rpc 00:11:25.675 ************************************ 00:11:25.675 13:54:05 -- common/autotest_common.sh@1111 -- # go_rpc 00:11:25.675 13:54:05 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:25.675 13:54:05 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:11:25.675 13:54:05 -- rpc/rpc.sh@52 -- # jq length 00:11:25.675 13:54:05 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:11:25.675 13:54:05 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 
512 00:11:25.675 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.675 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:25.933 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.933 13:54:05 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:11:25.933 13:54:05 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:25.933 13:54:05 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["64c4e257-8410-4d58-bf7f-844f7720be59"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"64c4e257-8410-4d58-bf7f-844f7720be59","zoned":false}]' 00:11:25.933 13:54:05 -- rpc/rpc.sh@57 -- # jq length 00:11:25.933 13:54:05 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:11:25.933 13:54:05 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:25.933 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.933 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:25.933 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.933 13:54:05 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:25.933 13:54:05 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:11:25.933 13:54:05 -- rpc/rpc.sh@61 -- # jq length 00:11:25.933 13:54:05 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:11:25.933 00:11:25.933 real 0m0.255s 00:11:25.933 user 0m0.142s 00:11:25.933 sys 0m0.047s 00:11:25.933 13:54:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:25.933 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:25.933 ************************************ 00:11:25.933 END TEST go_rpc 00:11:25.933 ************************************ 00:11:25.933 13:54:05 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:25.933 13:54:05 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:25.933 13:54:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:25.933 13:54:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.933 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.192 ************************************ 00:11:26.192 START TEST rpc_daemon_integrity 00:11:26.192 ************************************ 00:11:26.192 13:54:05 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:11:26.192 13:54:05 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.192 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.192 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.192 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.192 13:54:05 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:26.192 13:54:05 -- rpc/rpc.sh@13 -- # jq length 00:11:26.192 13:54:05 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:26.192 13:54:05 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:26.192 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.192 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.193 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.193 13:54:05 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:11:26.193 13:54:05 -- rpc/rpc.sh@16 
-- # rpc_cmd bdev_get_bdevs 00:11:26.193 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.193 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.193 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.193 13:54:05 -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:26.193 { 00:11:26.193 "aliases": [ 00:11:26.193 "b9379bad-4df5-4f47-b493-173b90de83ca" 00:11:26.193 ], 00:11:26.193 "assigned_rate_limits": { 00:11:26.193 "r_mbytes_per_sec": 0, 00:11:26.193 "rw_ios_per_sec": 0, 00:11:26.193 "rw_mbytes_per_sec": 0, 00:11:26.193 "w_mbytes_per_sec": 0 00:11:26.193 }, 00:11:26.193 "block_size": 512, 00:11:26.193 "claimed": false, 00:11:26.193 "driver_specific": {}, 00:11:26.193 "memory_domains": [ 00:11:26.193 { 00:11:26.193 "dma_device_id": "system", 00:11:26.193 "dma_device_type": 1 00:11:26.193 }, 00:11:26.193 { 00:11:26.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.193 "dma_device_type": 2 00:11:26.193 } 00:11:26.193 ], 00:11:26.193 "name": "Malloc3", 00:11:26.193 "num_blocks": 16384, 00:11:26.193 "product_name": "Malloc disk", 00:11:26.193 "supported_io_types": { 00:11:26.193 "abort": true, 00:11:26.193 "compare": false, 00:11:26.193 "compare_and_write": false, 00:11:26.193 "flush": true, 00:11:26.193 "nvme_admin": false, 00:11:26.193 "nvme_io": false, 00:11:26.193 "read": true, 00:11:26.193 "reset": true, 00:11:26.193 "unmap": true, 00:11:26.193 "write": true, 00:11:26.193 "write_zeroes": true 00:11:26.193 }, 00:11:26.193 "uuid": "b9379bad-4df5-4f47-b493-173b90de83ca", 00:11:26.193 "zoned": false 00:11:26.193 } 00:11:26.193 ]' 00:11:26.193 13:54:05 -- rpc/rpc.sh@17 -- # jq length 00:11:26.193 13:54:05 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:26.193 13:54:05 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:11:26.193 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.193 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.193 [2024-04-26 13:54:05.832901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:26.193 [2024-04-26 13:54:05.832965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.193 [2024-04-26 13:54:05.832998] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:11:26.193 [2024-04-26 13:54:05.833014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.193 [2024-04-26 13:54:05.835595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.193 [2024-04-26 13:54:05.835640] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:26.193 Passthru0 00:11:26.193 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.193 13:54:05 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:26.193 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.193 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.451 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.451 13:54:05 -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:26.451 { 00:11:26.451 "aliases": [ 00:11:26.451 "b9379bad-4df5-4f47-b493-173b90de83ca" 00:11:26.451 ], 00:11:26.451 "assigned_rate_limits": { 00:11:26.452 "r_mbytes_per_sec": 0, 00:11:26.452 "rw_ios_per_sec": 0, 00:11:26.452 "rw_mbytes_per_sec": 0, 00:11:26.452 "w_mbytes_per_sec": 0 00:11:26.452 }, 00:11:26.452 "block_size": 512, 00:11:26.452 "claim_type": "exclusive_write", 00:11:26.452 "claimed": true, 00:11:26.452 "driver_specific": {}, 
00:11:26.452 "memory_domains": [ 00:11:26.452 { 00:11:26.452 "dma_device_id": "system", 00:11:26.452 "dma_device_type": 1 00:11:26.452 }, 00:11:26.452 { 00:11:26.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.452 "dma_device_type": 2 00:11:26.452 } 00:11:26.452 ], 00:11:26.452 "name": "Malloc3", 00:11:26.452 "num_blocks": 16384, 00:11:26.452 "product_name": "Malloc disk", 00:11:26.452 "supported_io_types": { 00:11:26.452 "abort": true, 00:11:26.452 "compare": false, 00:11:26.452 "compare_and_write": false, 00:11:26.452 "flush": true, 00:11:26.452 "nvme_admin": false, 00:11:26.452 "nvme_io": false, 00:11:26.452 "read": true, 00:11:26.452 "reset": true, 00:11:26.452 "unmap": true, 00:11:26.452 "write": true, 00:11:26.452 "write_zeroes": true 00:11:26.452 }, 00:11:26.452 "uuid": "b9379bad-4df5-4f47-b493-173b90de83ca", 00:11:26.452 "zoned": false 00:11:26.452 }, 00:11:26.452 { 00:11:26.452 "aliases": [ 00:11:26.452 "6a93b561-f6b7-5182-8880-71442ac21ae2" 00:11:26.452 ], 00:11:26.452 "assigned_rate_limits": { 00:11:26.452 "r_mbytes_per_sec": 0, 00:11:26.452 "rw_ios_per_sec": 0, 00:11:26.452 "rw_mbytes_per_sec": 0, 00:11:26.452 "w_mbytes_per_sec": 0 00:11:26.452 }, 00:11:26.452 "block_size": 512, 00:11:26.452 "claimed": false, 00:11:26.452 "driver_specific": { 00:11:26.452 "passthru": { 00:11:26.452 "base_bdev_name": "Malloc3", 00:11:26.452 "name": "Passthru0" 00:11:26.452 } 00:11:26.452 }, 00:11:26.452 "memory_domains": [ 00:11:26.452 { 00:11:26.452 "dma_device_id": "system", 00:11:26.452 "dma_device_type": 1 00:11:26.452 }, 00:11:26.452 { 00:11:26.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.452 "dma_device_type": 2 00:11:26.452 } 00:11:26.452 ], 00:11:26.452 "name": "Passthru0", 00:11:26.452 "num_blocks": 16384, 00:11:26.452 "product_name": "passthru", 00:11:26.452 "supported_io_types": { 00:11:26.452 "abort": true, 00:11:26.452 "compare": false, 00:11:26.452 "compare_and_write": false, 00:11:26.452 "flush": true, 00:11:26.452 "nvme_admin": false, 00:11:26.452 "nvme_io": false, 00:11:26.452 "read": true, 00:11:26.452 "reset": true, 00:11:26.452 "unmap": true, 00:11:26.452 "write": true, 00:11:26.452 "write_zeroes": true 00:11:26.452 }, 00:11:26.452 "uuid": "6a93b561-f6b7-5182-8880-71442ac21ae2", 00:11:26.452 "zoned": false 00:11:26.452 } 00:11:26.452 ]' 00:11:26.452 13:54:05 -- rpc/rpc.sh@21 -- # jq length 00:11:26.452 13:54:05 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:26.452 13:54:05 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:26.452 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.452 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.452 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.452 13:54:05 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:11:26.452 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.452 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.452 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.452 13:54:05 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:26.452 13:54:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:26.452 13:54:05 -- common/autotest_common.sh@10 -- # set +x 00:11:26.452 13:54:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:26.452 13:54:05 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:26.452 13:54:05 -- rpc/rpc.sh@26 -- # jq length 00:11:26.452 13:54:06 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:26.452 00:11:26.452 real 0m0.334s 00:11:26.452 user 0m0.158s 00:11:26.452 sys 0m0.066s 
00:11:26.452 13:54:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:26.452 13:54:06 -- common/autotest_common.sh@10 -- # set +x 00:11:26.452 ************************************ 00:11:26.452 END TEST rpc_daemon_integrity 00:11:26.452 ************************************ 00:11:26.452 13:54:06 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:26.452 13:54:06 -- rpc/rpc.sh@84 -- # killprocess 60204 00:11:26.452 13:54:06 -- common/autotest_common.sh@936 -- # '[' -z 60204 ']' 00:11:26.452 13:54:06 -- common/autotest_common.sh@940 -- # kill -0 60204 00:11:26.452 13:54:06 -- common/autotest_common.sh@941 -- # uname 00:11:26.452 13:54:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:26.452 13:54:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60204 00:11:26.452 13:54:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:26.452 13:54:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:26.452 killing process with pid 60204 00:11:26.452 13:54:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60204' 00:11:26.452 13:54:06 -- common/autotest_common.sh@955 -- # kill 60204 00:11:26.452 13:54:06 -- common/autotest_common.sh@960 -- # wait 60204 00:11:29.743 00:11:29.743 real 0m6.321s 00:11:29.743 user 0m7.078s 00:11:29.743 sys 0m1.235s 00:11:29.743 13:54:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.743 13:54:08 -- common/autotest_common.sh@10 -- # set +x 00:11:29.743 ************************************ 00:11:29.743 END TEST rpc 00:11:29.743 ************************************ 00:11:29.743 13:54:08 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:29.743 13:54:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.743 13:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.743 13:54:08 -- common/autotest_common.sh@10 -- # set +x 00:11:29.743 ************************************ 00:11:29.743 START TEST skip_rpc 00:11:29.743 ************************************ 00:11:29.743 13:54:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:29.743 * Looking for test storage... 00:11:29.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:29.743 13:54:08 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:29.743 13:54:08 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:29.743 13:54:08 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:29.743 13:54:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.743 13:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.743 13:54:08 -- common/autotest_common.sh@10 -- # set +x 00:11:29.743 ************************************ 00:11:29.743 START TEST skip_rpc 00:11:29.743 ************************************ 00:11:29.743 13:54:09 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:11:29.743 13:54:09 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60530 00:11:29.743 13:54:09 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:29.743 13:54:09 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:29.743 13:54:09 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:29.743 [2024-04-26 13:54:09.160734] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:11:29.743 [2024-04-26 13:54:09.160844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60530 ] 00:11:29.743 [2024-04-26 13:54:09.332327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.001 [2024-04-26 13:54:09.570084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.358 13:54:14 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:35.358 13:54:14 -- common/autotest_common.sh@638 -- # local es=0 00:11:35.358 13:54:14 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:35.358 13:54:14 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:11:35.358 13:54:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:35.358 13:54:14 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:11:35.358 13:54:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:35.358 13:54:14 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:11:35.358 13:54:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:35.358 13:54:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.358 2024/04/26 13:54:14 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:11:35.358 13:54:14 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:35.358 13:54:14 -- common/autotest_common.sh@641 -- # es=1 00:11:35.358 13:54:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:35.358 13:54:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:35.358 13:54:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:35.358 13:54:14 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:35.358 13:54:14 -- rpc/skip_rpc.sh@23 -- # killprocess 60530 00:11:35.358 13:54:14 -- common/autotest_common.sh@936 -- # '[' -z 60530 ']' 00:11:35.358 13:54:14 -- common/autotest_common.sh@940 -- # kill -0 60530 00:11:35.358 13:54:14 -- common/autotest_common.sh@941 -- # uname 00:11:35.358 13:54:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:35.358 13:54:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60530 00:11:35.358 13:54:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:35.358 killing process with pid 60530 00:11:35.358 13:54:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:35.358 13:54:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60530' 00:11:35.358 13:54:14 -- common/autotest_common.sh@955 -- # kill 60530 00:11:35.358 13:54:14 -- common/autotest_common.sh@960 -- # wait 60530 00:11:37.260 00:11:37.260 real 0m7.504s 00:11:37.260 user 0m7.034s 00:11:37.260 sys 0m0.383s 00:11:37.260 13:54:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:37.260 13:54:16 -- common/autotest_common.sh@10 -- # set +x 00:11:37.260 ************************************ 00:11:37.260 END TEST skip_rpc 00:11:37.260 ************************************ 00:11:37.260 13:54:16 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:37.260 13:54:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:37.260 13:54:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:37.260 13:54:16 -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.260 ************************************ 00:11:37.260 START TEST skip_rpc_with_json 00:11:37.260 ************************************ 00:11:37.260 13:54:16 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:11:37.260 13:54:16 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:37.260 13:54:16 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60649 00:11:37.260 13:54:16 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:37.260 13:54:16 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:37.260 13:54:16 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60649 00:11:37.260 13:54:16 -- common/autotest_common.sh@817 -- # '[' -z 60649 ']' 00:11:37.260 13:54:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.260 13:54:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:37.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.260 13:54:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.260 13:54:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:37.260 13:54:16 -- common/autotest_common.sh@10 -- # set +x 00:11:37.260 [2024-04-26 13:54:16.814964] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:37.260 [2024-04-26 13:54:16.815076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60649 ] 00:11:37.523 [2024-04-26 13:54:16.986569] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.784 [2024-04-26 13:54:17.219755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.721 13:54:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:38.721 13:54:18 -- common/autotest_common.sh@850 -- # return 0 00:11:38.721 13:54:18 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:38.721 13:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.721 13:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:38.721 [2024-04-26 13:54:18.180392] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:38.721 2024/04/26 13:54:18 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:11:38.721 request: 00:11:38.721 { 00:11:38.721 "method": "nvmf_get_transports", 00:11:38.721 "params": { 00:11:38.721 "trtype": "tcp" 00:11:38.721 } 00:11:38.721 } 00:11:38.721 Got JSON-RPC error response 00:11:38.721 GoRPCClient: error on JSON-RPC call 00:11:38.721 13:54:18 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:11:38.721 13:54:18 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:38.721 13:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.721 13:54:18 -- common/autotest_common.sh@10 -- # set +x 00:11:38.721 [2024-04-26 13:54:18.196412] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.721 13:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.721 13:54:18 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:38.721 13:54:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.721 13:54:18 -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.721 13:54:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.721 13:54:18 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:38.721 { 00:11:38.721 "subsystems": [ 00:11:38.721 { 00:11:38.721 "subsystem": "keyring", 00:11:38.721 "config": [] 00:11:38.721 }, 00:11:38.721 { 00:11:38.721 "subsystem": "iobuf", 00:11:38.721 "config": [ 00:11:38.721 { 00:11:38.721 "method": "iobuf_set_options", 00:11:38.721 "params": { 00:11:38.721 "large_bufsize": 135168, 00:11:38.721 "large_pool_count": 1024, 00:11:38.721 "small_bufsize": 8192, 00:11:38.721 "small_pool_count": 8192 00:11:38.721 } 00:11:38.721 } 00:11:38.722 ] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "sock", 00:11:38.722 "config": [ 00:11:38.722 { 00:11:38.722 "method": "sock_impl_set_options", 00:11:38.722 "params": { 00:11:38.722 "enable_ktls": false, 00:11:38.722 "enable_placement_id": 0, 00:11:38.722 "enable_quickack": false, 00:11:38.722 "enable_recv_pipe": true, 00:11:38.722 "enable_zerocopy_send_client": false, 00:11:38.722 "enable_zerocopy_send_server": true, 00:11:38.722 "impl_name": "posix", 00:11:38.722 "recv_buf_size": 2097152, 00:11:38.722 "send_buf_size": 2097152, 00:11:38.722 "tls_version": 0, 00:11:38.722 "zerocopy_threshold": 0 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "sock_impl_set_options", 00:11:38.722 "params": { 00:11:38.722 "enable_ktls": false, 00:11:38.722 "enable_placement_id": 0, 00:11:38.722 "enable_quickack": false, 00:11:38.722 "enable_recv_pipe": true, 00:11:38.722 "enable_zerocopy_send_client": false, 00:11:38.722 "enable_zerocopy_send_server": true, 00:11:38.722 "impl_name": "ssl", 00:11:38.722 "recv_buf_size": 4096, 00:11:38.722 "send_buf_size": 4096, 00:11:38.722 "tls_version": 0, 00:11:38.722 "zerocopy_threshold": 0 00:11:38.722 } 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "vmd", 00:11:38.722 "config": [] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "accel", 00:11:38.722 "config": [ 00:11:38.722 { 00:11:38.722 "method": "accel_set_options", 00:11:38.722 "params": { 00:11:38.722 "buf_count": 2048, 00:11:38.722 "large_cache_size": 16, 00:11:38.722 "sequence_count": 2048, 00:11:38.722 "small_cache_size": 128, 00:11:38.722 "task_count": 2048 00:11:38.722 } 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "bdev", 00:11:38.722 "config": [ 00:11:38.722 { 00:11:38.722 "method": "bdev_set_options", 00:11:38.722 "params": { 00:11:38.722 "bdev_auto_examine": true, 00:11:38.722 "bdev_io_cache_size": 256, 00:11:38.722 "bdev_io_pool_size": 65535, 00:11:38.722 "iobuf_large_cache_size": 16, 00:11:38.722 "iobuf_small_cache_size": 128 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "bdev_raid_set_options", 00:11:38.722 "params": { 00:11:38.722 "process_window_size_kb": 1024 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "bdev_iscsi_set_options", 00:11:38.722 "params": { 00:11:38.722 "timeout_sec": 30 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "bdev_nvme_set_options", 00:11:38.722 "params": { 00:11:38.722 "action_on_timeout": "none", 00:11:38.722 "allow_accel_sequence": false, 00:11:38.722 "arbitration_burst": 0, 00:11:38.722 "bdev_retry_count": 3, 00:11:38.722 "ctrlr_loss_timeout_sec": 0, 00:11:38.722 "delay_cmd_submit": true, 00:11:38.722 "dhchap_dhgroups": [ 00:11:38.722 "null", 00:11:38.722 "ffdhe2048", 00:11:38.722 
"ffdhe3072", 00:11:38.722 "ffdhe4096", 00:11:38.722 "ffdhe6144", 00:11:38.722 "ffdhe8192" 00:11:38.722 ], 00:11:38.722 "dhchap_digests": [ 00:11:38.722 "sha256", 00:11:38.722 "sha384", 00:11:38.722 "sha512" 00:11:38.722 ], 00:11:38.722 "disable_auto_failback": false, 00:11:38.722 "fast_io_fail_timeout_sec": 0, 00:11:38.722 "generate_uuids": false, 00:11:38.722 "high_priority_weight": 0, 00:11:38.722 "io_path_stat": false, 00:11:38.722 "io_queue_requests": 0, 00:11:38.722 "keep_alive_timeout_ms": 10000, 00:11:38.722 "low_priority_weight": 0, 00:11:38.722 "medium_priority_weight": 0, 00:11:38.722 "nvme_adminq_poll_period_us": 10000, 00:11:38.722 "nvme_error_stat": false, 00:11:38.722 "nvme_ioq_poll_period_us": 0, 00:11:38.722 "rdma_cm_event_timeout_ms": 0, 00:11:38.722 "rdma_max_cq_size": 0, 00:11:38.722 "rdma_srq_size": 0, 00:11:38.722 "reconnect_delay_sec": 0, 00:11:38.722 "timeout_admin_us": 0, 00:11:38.722 "timeout_us": 0, 00:11:38.722 "transport_ack_timeout": 0, 00:11:38.722 "transport_retry_count": 4, 00:11:38.722 "transport_tos": 0 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "bdev_nvme_set_hotplug", 00:11:38.722 "params": { 00:11:38.722 "enable": false, 00:11:38.722 "period_us": 100000 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "bdev_wait_for_examine" 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "scsi", 00:11:38.722 "config": null 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "scheduler", 00:11:38.722 "config": [ 00:11:38.722 { 00:11:38.722 "method": "framework_set_scheduler", 00:11:38.722 "params": { 00:11:38.722 "name": "static" 00:11:38.722 } 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "vhost_scsi", 00:11:38.722 "config": [] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "vhost_blk", 00:11:38.722 "config": [] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "ublk", 00:11:38.722 "config": [] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "nbd", 00:11:38.722 "config": [] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "nvmf", 00:11:38.722 "config": [ 00:11:38.722 { 00:11:38.722 "method": "nvmf_set_config", 00:11:38.722 "params": { 00:11:38.722 "admin_cmd_passthru": { 00:11:38.722 "identify_ctrlr": false 00:11:38.722 }, 00:11:38.722 "discovery_filter": "match_any" 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "nvmf_set_max_subsystems", 00:11:38.722 "params": { 00:11:38.722 "max_subsystems": 1024 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "nvmf_set_crdt", 00:11:38.722 "params": { 00:11:38.722 "crdt1": 0, 00:11:38.722 "crdt2": 0, 00:11:38.722 "crdt3": 0 00:11:38.722 } 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "method": "nvmf_create_transport", 00:11:38.722 "params": { 00:11:38.722 "abort_timeout_sec": 1, 00:11:38.722 "ack_timeout": 0, 00:11:38.722 "buf_cache_size": 4294967295, 00:11:38.722 "c2h_success": true, 00:11:38.722 "data_wr_pool_size": 0, 00:11:38.722 "dif_insert_or_strip": false, 00:11:38.722 "in_capsule_data_size": 4096, 00:11:38.722 "io_unit_size": 131072, 00:11:38.722 "max_aq_depth": 128, 00:11:38.722 "max_io_qpairs_per_ctrlr": 127, 00:11:38.722 "max_io_size": 131072, 00:11:38.722 "max_queue_depth": 128, 00:11:38.722 "num_shared_buffers": 511, 00:11:38.722 "sock_priority": 0, 00:11:38.722 "trtype": "TCP", 00:11:38.722 "zcopy": false 00:11:38.722 } 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 }, 00:11:38.722 { 00:11:38.722 "subsystem": "iscsi", 
00:11:38.722 "config": [ 00:11:38.722 { 00:11:38.722 "method": "iscsi_set_options", 00:11:38.722 "params": { 00:11:38.722 "allow_duplicated_isid": false, 00:11:38.722 "chap_group": 0, 00:11:38.722 "data_out_pool_size": 2048, 00:11:38.722 "default_time2retain": 20, 00:11:38.722 "default_time2wait": 2, 00:11:38.722 "disable_chap": false, 00:11:38.722 "error_recovery_level": 0, 00:11:38.722 "first_burst_length": 8192, 00:11:38.722 "immediate_data": true, 00:11:38.722 "immediate_data_pool_size": 16384, 00:11:38.722 "max_connections_per_session": 2, 00:11:38.722 "max_large_datain_per_connection": 64, 00:11:38.722 "max_queue_depth": 64, 00:11:38.722 "max_r2t_per_connection": 4, 00:11:38.722 "max_sessions": 128, 00:11:38.722 "mutual_chap": false, 00:11:38.722 "node_base": "iqn.2016-06.io.spdk", 00:11:38.722 "nop_in_interval": 30, 00:11:38.722 "nop_timeout": 60, 00:11:38.722 "pdu_pool_size": 36864, 00:11:38.722 "require_chap": false 00:11:38.722 } 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 } 00:11:38.722 ] 00:11:38.722 } 00:11:38.722 13:54:18 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:38.722 13:54:18 -- rpc/skip_rpc.sh@40 -- # killprocess 60649 00:11:38.722 13:54:18 -- common/autotest_common.sh@936 -- # '[' -z 60649 ']' 00:11:38.722 13:54:18 -- common/autotest_common.sh@940 -- # kill -0 60649 00:11:38.722 13:54:18 -- common/autotest_common.sh@941 -- # uname 00:11:38.722 13:54:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:38.722 13:54:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60649 00:11:38.981 13:54:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:38.981 killing process with pid 60649 00:11:38.981 13:54:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:38.981 13:54:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60649' 00:11:38.981 13:54:18 -- common/autotest_common.sh@955 -- # kill 60649 00:11:38.981 13:54:18 -- common/autotest_common.sh@960 -- # wait 60649 00:11:41.518 13:54:20 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60717 00:11:41.518 13:54:20 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:41.518 13:54:20 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:11:46.789 13:54:25 -- rpc/skip_rpc.sh@50 -- # killprocess 60717 00:11:46.789 13:54:25 -- common/autotest_common.sh@936 -- # '[' -z 60717 ']' 00:11:46.789 13:54:25 -- common/autotest_common.sh@940 -- # kill -0 60717 00:11:46.789 13:54:25 -- common/autotest_common.sh@941 -- # uname 00:11:46.789 13:54:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.789 13:54:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60717 00:11:46.789 killing process with pid 60717 00:11:46.789 13:54:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:46.789 13:54:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:46.789 13:54:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60717' 00:11:46.789 13:54:25 -- common/autotest_common.sh@955 -- # kill 60717 00:11:46.789 13:54:25 -- common/autotest_common.sh@960 -- # wait 60717 00:11:48.760 13:54:28 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:48.760 13:54:28 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:48.760 ************************************ 00:11:48.760 END TEST skip_rpc_with_json 00:11:48.760 
************************************ 00:11:48.760 00:11:48.760 real 0m11.606s 00:11:48.760 user 0m10.999s 00:11:48.760 sys 0m0.892s 00:11:48.760 13:54:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:48.760 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:48.760 13:54:28 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:11:48.760 13:54:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:48.760 13:54:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.760 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.020 ************************************ 00:11:49.020 START TEST skip_rpc_with_delay 00:11:49.020 ************************************ 00:11:49.020 13:54:28 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:11:49.020 13:54:28 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:49.020 13:54:28 -- common/autotest_common.sh@638 -- # local es=0 00:11:49.020 13:54:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:49.020 13:54:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:49.020 13:54:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:49.020 13:54:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:49.020 13:54:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:49.020 13:54:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:49.020 13:54:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:49.020 13:54:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:49.020 13:54:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:49.020 13:54:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:11:49.020 [2024-04-26 13:54:28.582430] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
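The skip_rpc_with_json run that just completed is essentially a save-and-replay round trip of the runtime configuration. A rough manual equivalent, with paths and the grep pattern taken from the trace above; process start-up waits are only sketched in comments:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
  log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
  $bin -m 0x1 & tgt=$!                     # first boot with an empty config
  # (wait here for /var/tmp/spdk.sock to appear before issuing RPCs)
  $rpc nvmf_create_transport -t tcp        # add the TCP transport at runtime
  $rpc save_config > "$cfg"                # snapshot the live configuration as JSON
  kill $tgt; wait $tgt
  # second boot replays the JSON; no RPC server is needed for the transport to come back
  $bin --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 & tgt=$!
  sleep 5                                  # the test also sleeps before checking the log
  grep -q 'TCP Transport Init' "$log"      # the same check the test performs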
00:11:49.020 [2024-04-26 13:54:28.582582] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:11:49.020 13:54:28 -- common/autotest_common.sh@641 -- # es=1 00:11:49.020 13:54:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:49.020 13:54:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:49.020 13:54:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:49.020 00:11:49.020 real 0m0.195s 00:11:49.020 user 0m0.099s 00:11:49.020 sys 0m0.094s 00:11:49.020 13:54:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:49.020 ************************************ 00:11:49.020 END TEST skip_rpc_with_delay 00:11:49.020 ************************************ 00:11:49.020 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.279 13:54:28 -- rpc/skip_rpc.sh@77 -- # uname 00:11:49.279 13:54:28 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:11:49.279 13:54:28 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:11:49.279 13:54:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:49.279 13:54:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:49.279 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.279 ************************************ 00:11:49.279 START TEST exit_on_failed_rpc_init 00:11:49.279 ************************************ 00:11:49.279 13:54:28 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:11:49.279 13:54:28 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60860 00:11:49.279 13:54:28 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:49.279 13:54:28 -- rpc/skip_rpc.sh@63 -- # waitforlisten 60860 00:11:49.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.279 13:54:28 -- common/autotest_common.sh@817 -- # '[' -z 60860 ']' 00:11:49.279 13:54:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.279 13:54:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:49.279 13:54:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.279 13:54:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:49.279 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.279 [2024-04-26 13:54:28.913571] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:11:49.279 [2024-04-26 13:54:28.913702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60860 ] 00:11:49.538 [2024-04-26 13:54:29.082937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.797 [2024-04-26 13:54:29.319668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.734 13:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.734 13:54:30 -- common/autotest_common.sh@850 -- # return 0 00:11:50.734 13:54:30 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:50.734 13:54:30 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:50.734 13:54:30 -- common/autotest_common.sh@638 -- # local es=0 00:11:50.734 13:54:30 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:50.734 13:54:30 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.734 13:54:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.734 13:54:30 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.734 13:54:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.734 13:54:30 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.734 13:54:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.734 13:54:30 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.734 13:54:30 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:11:50.734 13:54:30 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:11:50.994 [2024-04-26 13:54:30.420871] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:11:50.994 [2024-04-26 13:54:30.420999] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60901 ] 00:11:50.994 [2024-04-26 13:54:30.589727] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.253 [2024-04-26 13:54:30.826347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.253 [2024-04-26 13:54:30.826447] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
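The rpc.c errors above are the point of exit_on_failed_rpc_init: the second spdk_tgt instance is deliberately started without its own RPC socket, so it collides with the first one on /var/tmp/spdk.sock and exits. Outside of this negative test, two targets coexist by giving the second a distinct socket via -r; a sketch, where /var/tmp/spdk2.sock is just an example path and not taken from this run:

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  $bin -m 0x1 &                                  # first instance, default /var/tmp/spdk.sock
  $bin -m 0x2 -r /var/tmp/spdk2.sock &           # second instance, its own RPC socket
  # talk to each instance through its own socket
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock spdk_get_version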
00:11:51.253 [2024-04-26 13:54:30.826463] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:51.253 [2024-04-26 13:54:30.826477] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:51.821 13:54:31 -- common/autotest_common.sh@641 -- # es=234 00:11:51.821 13:54:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:51.821 13:54:31 -- common/autotest_common.sh@650 -- # es=106 00:11:51.821 13:54:31 -- common/autotest_common.sh@651 -- # case "$es" in 00:11:51.821 13:54:31 -- common/autotest_common.sh@658 -- # es=1 00:11:51.821 13:54:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.821 13:54:31 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:51.821 13:54:31 -- rpc/skip_rpc.sh@70 -- # killprocess 60860 00:11:51.821 13:54:31 -- common/autotest_common.sh@936 -- # '[' -z 60860 ']' 00:11:51.821 13:54:31 -- common/autotest_common.sh@940 -- # kill -0 60860 00:11:51.821 13:54:31 -- common/autotest_common.sh@941 -- # uname 00:11:51.821 13:54:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.821 13:54:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60860 00:11:51.821 13:54:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:51.821 13:54:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:51.821 killing process with pid 60860 00:11:51.821 13:54:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60860' 00:11:51.821 13:54:31 -- common/autotest_common.sh@955 -- # kill 60860 00:11:51.822 13:54:31 -- common/autotest_common.sh@960 -- # wait 60860 00:11:54.366 00:11:54.366 real 0m4.986s 00:11:54.366 user 0m5.618s 00:11:54.366 sys 0m0.624s 00:11:54.366 13:54:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.366 ************************************ 00:11:54.366 END TEST exit_on_failed_rpc_init 00:11:54.366 ************************************ 00:11:54.366 13:54:33 -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 13:54:33 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:54.366 00:11:54.366 real 0m25.016s 00:11:54.366 user 0m23.994s 00:11:54.366 sys 0m2.409s 00:11:54.366 13:54:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.366 ************************************ 00:11:54.366 END TEST skip_rpc 00:11:54.366 ************************************ 00:11:54.366 13:54:33 -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 13:54:33 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:54.366 13:54:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:54.366 13:54:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.366 13:54:33 -- common/autotest_common.sh@10 -- # set +x 00:11:54.366 ************************************ 00:11:54.366 START TEST rpc_client 00:11:54.366 ************************************ 00:11:54.366 13:54:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:11:54.626 * Looking for test storage... 
00:11:54.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:11:54.626 13:54:34 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:11:54.626 OK 00:11:54.626 13:54:34 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:11:54.626 00:11:54.626 real 0m0.211s 00:11:54.626 user 0m0.105s 00:11:54.626 sys 0m0.115s 00:11:54.626 13:54:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:54.626 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.626 ************************************ 00:11:54.626 END TEST rpc_client 00:11:54.626 ************************************ 00:11:54.626 13:54:34 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:54.626 13:54:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:54.626 13:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:54.626 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.885 ************************************ 00:11:54.885 START TEST json_config 00:11:54.885 ************************************ 00:11:54.885 13:54:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:11:54.885 13:54:34 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:54.885 13:54:34 -- nvmf/common.sh@7 -- # uname -s 00:11:54.885 13:54:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.885 13:54:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.885 13:54:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.885 13:54:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.885 13:54:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.885 13:54:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.885 13:54:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.885 13:54:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.885 13:54:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.885 13:54:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.885 13:54:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:11:54.885 13:54:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:11:54.885 13:54:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.885 13:54:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.885 13:54:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:54.885 13:54:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.885 13:54:34 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:54.885 13:54:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.885 13:54:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.885 13:54:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.885 13:54:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.885 13:54:34 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.885 13:54:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.885 13:54:34 -- paths/export.sh@5 -- # export PATH 00:11:54.885 13:54:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.885 13:54:34 -- nvmf/common.sh@47 -- # : 0 00:11:54.885 13:54:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:54.885 13:54:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:54.885 13:54:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.885 13:54:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.885 13:54:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.885 13:54:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:54.885 13:54:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:54.885 13:54:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:54.885 13:54:34 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:11:54.885 13:54:34 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:11:54.885 13:54:34 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:11:54.885 13:54:34 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:11:54.885 13:54:34 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:11:54.885 13:54:34 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:11:54.885 13:54:34 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:11:54.885 13:54:34 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:11:54.885 13:54:34 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:11:54.885 13:54:34 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:11:54.885 13:54:34 -- json_config/json_config.sh@33 -- # declare -A app_params 00:11:54.885 13:54:34 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:11:54.885 13:54:34 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:11:54.885 13:54:34 -- json_config/json_config.sh@40 -- # last_event_id=0 00:11:54.885 
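Everything in the json_config test below goes through the tgt_rpc helper, which (as the common.sh trace further down shows) is scripts/rpc.py pointed at /var/tmp/spdk_tgt.sock. A sketch of that pattern; piping gen_nvme.sh into load_config is how the @273/@274 pair is normally wired together, stated here as an assumption rather than read literally from the trace:

  rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  # feed the generated NVMe bdev/subsystem JSON into the already-running target
  /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | rpc_py load_config
  # later comparisons are made against a fresh dump of the live configuration
  rpc_py save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json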
INFO: JSON configuration test init 00:11:54.885 13:54:34 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:11:54.885 13:54:34 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:11:54.885 13:54:34 -- json_config/json_config.sh@357 -- # json_config_test_init 00:11:54.885 13:54:34 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:11:54.885 13:54:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:54.885 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.885 13:54:34 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:11:54.885 13:54:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:54.885 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:54.885 13:54:34 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:11:54.885 13:54:34 -- json_config/common.sh@9 -- # local app=target 00:11:54.885 13:54:34 -- json_config/common.sh@10 -- # shift 00:11:54.885 13:54:34 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:11:54.885 13:54:34 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:11:54.885 13:54:34 -- json_config/common.sh@15 -- # local app_extra_params= 00:11:54.885 13:54:34 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:54.885 13:54:34 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:11:54.885 13:54:34 -- json_config/common.sh@22 -- # app_pid["$app"]=61064 00:11:54.885 13:54:34 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:11:54.885 13:54:34 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:11:54.885 Waiting for target to run... 00:11:54.885 13:54:34 -- json_config/common.sh@25 -- # waitforlisten 61064 /var/tmp/spdk_tgt.sock 00:11:54.885 13:54:34 -- common/autotest_common.sh@817 -- # '[' -z 61064 ']' 00:11:54.885 13:54:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:11:54.885 13:54:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:54.885 13:54:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:11:54.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:11:54.885 13:54:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:54.885 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:11:55.144 [2024-04-26 13:54:34.587874] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:11:55.144 [2024-04-26 13:54:34.588179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:11:55.402 [2024-04-26 13:54:34.977204] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.660 [2024-04-26 13:54:35.195261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.918 00:11:55.918 13:54:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:55.918 13:54:35 -- common/autotest_common.sh@850 -- # return 0 00:11:55.918 13:54:35 -- json_config/common.sh@26 -- # echo '' 00:11:55.918 13:54:35 -- json_config/json_config.sh@269 -- # create_accel_config 00:11:55.918 13:54:35 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:11:55.918 13:54:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:55.918 13:54:35 -- common/autotest_common.sh@10 -- # set +x 00:11:55.918 13:54:35 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:11:55.918 13:54:35 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:11:55.918 13:54:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:55.918 13:54:35 -- common/autotest_common.sh@10 -- # set +x 00:11:55.918 13:54:35 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:11:55.918 13:54:35 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:11:55.918 13:54:35 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:11:57.289 13:54:36 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:11:57.289 13:54:36 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:11:57.289 13:54:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:57.289 13:54:36 -- common/autotest_common.sh@10 -- # set +x 00:11:57.289 13:54:36 -- json_config/json_config.sh@45 -- # local ret=0 00:11:57.289 13:54:36 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:11:57.289 13:54:36 -- json_config/json_config.sh@46 -- # local enabled_types 00:11:57.289 13:54:36 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:11:57.289 13:54:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:11:57.289 13:54:36 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:11:57.289 13:54:36 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:11:57.289 13:54:36 -- json_config/json_config.sh@48 -- # local get_types 00:11:57.289 13:54:36 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:11:57.289 13:54:36 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:11:57.289 13:54:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:57.289 13:54:36 -- common/autotest_common.sh@10 -- # set +x 00:11:57.289 13:54:36 -- json_config/json_config.sh@55 -- # return 0 00:11:57.289 13:54:36 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:11:57.289 13:54:36 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:11:57.289 13:54:36 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:11:57.289 13:54:36 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
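The create_nvmf_subsystem_config step that follows builds the NVMe-oF half of the configuration. Lifted from the trace below into plain rpc.py calls, with names, sizes, and the 127.0.0.1:4420 listener exactly as in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB backing bdev, 512 B blocks
  $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB backing bdev, 1 KiB blocks
  $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420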
00:11:57.289 13:54:36 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:11:57.289 13:54:36 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:11:57.289 13:54:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:57.289 13:54:36 -- common/autotest_common.sh@10 -- # set +x 00:11:57.289 13:54:36 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:11:57.289 13:54:36 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:11:57.289 13:54:36 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:11:57.289 13:54:36 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:57.289 13:54:36 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:11:57.547 MallocForNvmf0 00:11:57.547 13:54:37 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:57.547 13:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:11:57.804 MallocForNvmf1 00:11:57.804 13:54:37 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:11:57.804 13:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:11:58.061 [2024-04-26 13:54:37.511417] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:58.061 13:54:37 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.061 13:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:58.061 13:54:37 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:58.061 13:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:11:58.318 13:54:37 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:58.318 13:54:37 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:11:58.576 13:54:38 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:58.576 13:54:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:11:58.576 [2024-04-26 13:54:38.230697] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:11:58.834 13:54:38 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:11:58.834 13:54:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:58.834 13:54:38 -- common/autotest_common.sh@10 -- # set +x 00:11:58.834 13:54:38 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:11:58.834 13:54:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:58.834 13:54:38 -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.834 13:54:38 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:11:58.834 13:54:38 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:58.834 13:54:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:11:59.092 MallocBdevForConfigChangeCheck 00:11:59.092 13:54:38 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:11:59.092 13:54:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:59.092 13:54:38 -- common/autotest_common.sh@10 -- # set +x 00:11:59.092 13:54:38 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:11:59.092 13:54:38 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:59.359 INFO: shutting down applications... 00:11:59.359 13:54:38 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:11:59.359 13:54:38 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:11:59.359 13:54:38 -- json_config/json_config.sh@368 -- # json_config_clear target 00:11:59.359 13:54:38 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:11:59.359 13:54:38 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:11:59.617 Calling clear_iscsi_subsystem 00:11:59.617 Calling clear_nvmf_subsystem 00:11:59.617 Calling clear_nbd_subsystem 00:11:59.617 Calling clear_ublk_subsystem 00:11:59.617 Calling clear_vhost_blk_subsystem 00:11:59.617 Calling clear_vhost_scsi_subsystem 00:11:59.617 Calling clear_bdev_subsystem 00:11:59.617 13:54:39 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:11:59.617 13:54:39 -- json_config/json_config.sh@343 -- # count=100 00:11:59.617 13:54:39 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:11:59.617 13:54:39 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:11:59.617 13:54:39 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:11:59.617 13:54:39 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:00.186 13:54:39 -- json_config/json_config.sh@345 -- # break 00:12:00.186 13:54:39 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:12:00.186 13:54:39 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:12:00.186 13:54:39 -- json_config/common.sh@31 -- # local app=target 00:12:00.186 13:54:39 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:00.186 13:54:39 -- json_config/common.sh@35 -- # [[ -n 61064 ]] 00:12:00.186 13:54:39 -- json_config/common.sh@38 -- # kill -SIGINT 61064 00:12:00.186 13:54:39 -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:00.186 13:54:39 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:00.186 13:54:39 -- json_config/common.sh@41 -- # kill -0 61064 00:12:00.186 13:54:39 -- json_config/common.sh@45 -- # sleep 0.5 00:12:00.754 13:54:40 -- json_config/common.sh@40 -- # (( i++ )) 00:12:00.754 13:54:40 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:00.754 13:54:40 -- json_config/common.sh@41 -- # kill -0 61064 00:12:00.754 13:54:40 -- 
json_config/common.sh@45 -- # sleep 0.5 00:12:01.014 13:54:40 -- json_config/common.sh@40 -- # (( i++ )) 00:12:01.014 13:54:40 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:01.014 13:54:40 -- json_config/common.sh@41 -- # kill -0 61064 00:12:01.014 13:54:40 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:01.014 13:54:40 -- json_config/common.sh@43 -- # break 00:12:01.014 13:54:40 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:01.014 SPDK target shutdown done 00:12:01.014 13:54:40 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:01.014 INFO: relaunching applications... 00:12:01.014 13:54:40 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:12:01.014 13:54:40 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:01.014 13:54:40 -- json_config/common.sh@9 -- # local app=target 00:12:01.014 13:54:40 -- json_config/common.sh@10 -- # shift 00:12:01.014 13:54:40 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:01.014 13:54:40 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:01.014 13:54:40 -- json_config/common.sh@15 -- # local app_extra_params= 00:12:01.014 13:54:40 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:01.014 13:54:40 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:01.014 13:54:40 -- json_config/common.sh@22 -- # app_pid["$app"]=61343 00:12:01.014 Waiting for target to run... 00:12:01.014 13:54:40 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:01.014 13:54:40 -- json_config/common.sh@25 -- # waitforlisten 61343 /var/tmp/spdk_tgt.sock 00:12:01.014 13:54:40 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:01.014 13:54:40 -- common/autotest_common.sh@817 -- # '[' -z 61343 ']' 00:12:01.014 13:54:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:01.014 13:54:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:01.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:01.014 13:54:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:01.014 13:54:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:01.014 13:54:40 -- common/autotest_common.sh@10 -- # set +x 00:12:01.272 [2024-04-26 13:54:40.757910] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
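The NVMe-oF/TCP target configuration that json_config_setup_target built above reduces to the following RPC sequence; every command, size, NQN and address below is taken verbatim from the trace:

    # Shorthand for the rpc.py invocation used throughout the trace:
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # malloc bdevs used as namespaces
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420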
00:12:01.272 [2024-04-26 13:54:40.758086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61343 ] 00:12:01.532 [2024-04-26 13:54:41.152283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.790 [2024-04-26 13:54:41.363410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.170 [2024-04-26 13:54:42.443500] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.170 [2024-04-26 13:54:42.475519] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:03.170 13:54:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:03.170 13:54:42 -- common/autotest_common.sh@850 -- # return 0 00:12:03.170 00:12:03.170 13:54:42 -- json_config/common.sh@26 -- # echo '' 00:12:03.170 13:54:42 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:12:03.170 INFO: Checking if target configuration is the same... 00:12:03.170 13:54:42 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:03.170 13:54:42 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:03.170 13:54:42 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:12:03.170 13:54:42 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:03.170 + '[' 2 -ne 2 ']' 00:12:03.170 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:03.170 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:03.170 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:03.170 +++ basename /dev/fd/62 00:12:03.170 ++ mktemp /tmp/62.XXX 00:12:03.170 + tmp_file_1=/tmp/62.ppG 00:12:03.170 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:03.170 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:03.170 + tmp_file_2=/tmp/spdk_tgt_config.json.nH1 00:12:03.170 + ret=0 00:12:03.170 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:03.428 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:03.428 + diff -u /tmp/62.ppG /tmp/spdk_tgt_config.json.nH1 00:12:03.428 INFO: JSON config files are the same 00:12:03.428 + echo 'INFO: JSON config files are the same' 00:12:03.428 + rm /tmp/62.ppG /tmp/spdk_tgt_config.json.nH1 00:12:03.428 + exit 0 00:12:03.428 13:54:42 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:12:03.428 INFO: changing configuration and checking if this can be detected... 00:12:03.428 13:54:42 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
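The "JSON config files are the same" verdict above comes from normalizing both JSON documents and diffing them; a rough equivalent of what json_diff.sh did, assuming (as the bare invocations in the trace suggest) that config_filter.py -method sort filters stdin to stdout, and with /tmp/live.json and /tmp/ref.sorted as illustrative names in place of the mktemp files:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
    /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref.sorted
    diff -u /tmp/live.sorted /tmp/ref.sorted && echo 'INFO: JSON config files are the same'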
00:12:03.428 13:54:42 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:03.428 13:54:42 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:03.702 13:54:43 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:03.702 13:54:43 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:12:03.702 13:54:43 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:03.702 + '[' 2 -ne 2 ']' 00:12:03.702 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:03.702 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:03.702 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:03.702 +++ basename /dev/fd/62 00:12:03.702 ++ mktemp /tmp/62.XXX 00:12:03.702 + tmp_file_1=/tmp/62.sMP 00:12:03.702 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:03.702 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:03.702 + tmp_file_2=/tmp/spdk_tgt_config.json.kl8 00:12:03.702 + ret=0 00:12:03.702 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:03.975 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:03.975 + diff -u /tmp/62.sMP /tmp/spdk_tgt_config.json.kl8 00:12:03.975 + ret=1 00:12:03.975 + echo '=== Start of file: /tmp/62.sMP ===' 00:12:03.975 + cat /tmp/62.sMP 00:12:03.975 + echo '=== End of file: /tmp/62.sMP ===' 00:12:03.975 + echo '' 00:12:03.975 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kl8 ===' 00:12:03.975 + cat /tmp/spdk_tgt_config.json.kl8 00:12:03.975 + echo '=== End of file: /tmp/spdk_tgt_config.json.kl8 ===' 00:12:03.975 + echo '' 00:12:03.975 + rm /tmp/62.sMP /tmp/spdk_tgt_config.json.kl8 00:12:03.975 + exit 1 00:12:03.975 INFO: configuration change detected. 00:12:03.976 13:54:43 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
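Change detection is the same diff run a second time after removing the marker bdev created earlier; the single RPC below is the only mutation, so a non-empty diff (ret=1, "configuration change detected") is the expected outcome:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck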
00:12:03.976 13:54:43 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:12:03.976 13:54:43 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:12:03.976 13:54:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:03.976 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 13:54:43 -- json_config/json_config.sh@307 -- # local ret=0 00:12:03.976 13:54:43 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:12:03.976 13:54:43 -- json_config/json_config.sh@317 -- # [[ -n 61343 ]] 00:12:03.976 13:54:43 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:12:03.976 13:54:43 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:12:03.976 13:54:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:03.976 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 13:54:43 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:12:03.976 13:54:43 -- json_config/json_config.sh@193 -- # uname -s 00:12:03.976 13:54:43 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:12:03.976 13:54:43 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:12:03.976 13:54:43 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:12:03.976 13:54:43 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:12:03.976 13:54:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:03.976 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:03.976 13:54:43 -- json_config/json_config.sh@323 -- # killprocess 61343 00:12:03.976 13:54:43 -- common/autotest_common.sh@936 -- # '[' -z 61343 ']' 00:12:03.976 13:54:43 -- common/autotest_common.sh@940 -- # kill -0 61343 00:12:03.976 13:54:43 -- common/autotest_common.sh@941 -- # uname 00:12:03.976 13:54:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.976 13:54:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61343 00:12:04.234 13:54:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:04.234 killing process with pid 61343 00:12:04.234 13:54:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:04.234 13:54:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61343' 00:12:04.234 13:54:43 -- common/autotest_common.sh@955 -- # kill 61343 00:12:04.234 13:54:43 -- common/autotest_common.sh@960 -- # wait 61343 00:12:05.168 13:54:44 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:05.168 13:54:44 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:12:05.168 13:54:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:05.168 13:54:44 -- common/autotest_common.sh@10 -- # set +x 00:12:05.168 13:54:44 -- json_config/json_config.sh@328 -- # return 0 00:12:05.168 INFO: Success 00:12:05.168 13:54:44 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:12:05.168 00:12:05.168 real 0m10.311s 00:12:05.168 user 0m12.712s 00:12:05.168 sys 0m2.291s 00:12:05.168 13:54:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:05.168 13:54:44 -- common/autotest_common.sh@10 -- # set +x 00:12:05.168 ************************************ 00:12:05.168 END TEST json_config 00:12:05.168 ************************************ 00:12:05.168 13:54:44 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:05.168 
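killprocess, as traced above, first confirms that the PID still belongs to an SPDK reactor before terminating it; a condensed sketch of the steps shown:

    ps --no-headers -o comm= 61343   # prints "reactor_0" while the target is alive
    kill 61343
    wait 61343                       # reaps the child; only valid from the shell that spawned it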
13:54:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:05.168 13:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:05.168 13:54:44 -- common/autotest_common.sh@10 -- # set +x 00:12:05.168 ************************************ 00:12:05.168 START TEST json_config_extra_key 00:12:05.168 ************************************ 00:12:05.168 13:54:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.430 13:54:44 -- nvmf/common.sh@7 -- # uname -s 00:12:05.430 13:54:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.430 13:54:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.430 13:54:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.430 13:54:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.430 13:54:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.430 13:54:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.430 13:54:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.430 13:54:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.430 13:54:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.430 13:54:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.430 13:54:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:12:05.430 13:54:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:12:05.430 13:54:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.430 13:54:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.430 13:54:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:05.430 13:54:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.430 13:54:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.430 13:54:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.430 13:54:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.430 13:54:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.430 13:54:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.430 13:54:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.430 13:54:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.430 13:54:44 -- paths/export.sh@5 -- # export PATH 00:12:05.430 13:54:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.430 13:54:44 -- nvmf/common.sh@47 -- # : 0 00:12:05.430 13:54:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.430 13:54:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.430 13:54:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.430 13:54:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.430 13:54:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.430 13:54:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.430 13:54:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.430 13:54:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:05.430 INFO: launching applications... 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:05.430 13:54:44 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:05.430 13:54:44 -- json_config/common.sh@9 -- # local app=target 00:12:05.430 13:54:44 -- json_config/common.sh@10 -- # shift 00:12:05.430 13:54:44 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:05.430 13:54:44 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:05.430 13:54:44 -- json_config/common.sh@15 -- # local app_extra_params= 00:12:05.430 13:54:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:05.430 13:54:44 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:05.430 Waiting for target to run... 
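json_config/common.sh keeps its per-app bookkeeping in bash associative arrays, as the declare -A traces above show; the extra_key run starts from this state:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')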
00:12:05.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:05.430 13:54:44 -- json_config/common.sh@22 -- # app_pid["$app"]=61536 00:12:05.430 13:54:44 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:05.430 13:54:44 -- json_config/common.sh@25 -- # waitforlisten 61536 /var/tmp/spdk_tgt.sock 00:12:05.430 13:54:44 -- common/autotest_common.sh@817 -- # '[' -z 61536 ']' 00:12:05.430 13:54:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:05.430 13:54:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:05.430 13:54:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:05.430 13:54:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:05.430 13:54:44 -- common/autotest_common.sh@10 -- # set +x 00:12:05.430 13:54:44 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:05.430 [2024-04-26 13:54:45.064659] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:05.430 [2024-04-26 13:54:45.064785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61536 ] 00:12:06.004 [2024-04-26 13:54:45.442380] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.004 [2024-04-26 13:54:45.647749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.944 00:12:06.944 INFO: shutting down applications... 00:12:06.944 13:54:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:06.944 13:54:46 -- common/autotest_common.sh@850 -- # return 0 00:12:06.944 13:54:46 -- json_config/common.sh@26 -- # echo '' 00:12:06.944 13:54:46 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
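The shutdown that follows sends SIGINT once and then polls the PID with kill -0 every half second, giving the target up to 30 attempts (about 15 s) to exit; a sketch of the loop driving the repeated "sleep 0.5" entries below:

    kill -SIGINT 61536
    for ((i = 0; i < 30; i++)); do
        kill -0 61536 2>/dev/null || break   # kill -0 only checks liveness, it sends no signal
        sleep 0.5
    done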
00:12:06.944 13:54:46 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:06.944 13:54:46 -- json_config/common.sh@31 -- # local app=target 00:12:06.944 13:54:46 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:06.944 13:54:46 -- json_config/common.sh@35 -- # [[ -n 61536 ]] 00:12:06.944 13:54:46 -- json_config/common.sh@38 -- # kill -SIGINT 61536 00:12:06.944 13:54:46 -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:06.944 13:54:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:06.944 13:54:46 -- json_config/common.sh@41 -- # kill -0 61536 00:12:06.944 13:54:46 -- json_config/common.sh@45 -- # sleep 0.5 00:12:07.511 13:54:46 -- json_config/common.sh@40 -- # (( i++ )) 00:12:07.511 13:54:46 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:07.511 13:54:46 -- json_config/common.sh@41 -- # kill -0 61536 00:12:07.511 13:54:46 -- json_config/common.sh@45 -- # sleep 0.5 00:12:08.080 13:54:47 -- json_config/common.sh@40 -- # (( i++ )) 00:12:08.080 13:54:47 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:08.080 13:54:47 -- json_config/common.sh@41 -- # kill -0 61536 00:12:08.080 13:54:47 -- json_config/common.sh@45 -- # sleep 0.5 00:12:08.339 13:54:47 -- json_config/common.sh@40 -- # (( i++ )) 00:12:08.339 13:54:47 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:08.339 13:54:47 -- json_config/common.sh@41 -- # kill -0 61536 00:12:08.339 13:54:47 -- json_config/common.sh@45 -- # sleep 0.5 00:12:08.906 13:54:48 -- json_config/common.sh@40 -- # (( i++ )) 00:12:08.906 13:54:48 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:08.906 13:54:48 -- json_config/common.sh@41 -- # kill -0 61536 00:12:08.906 13:54:48 -- json_config/common.sh@45 -- # sleep 0.5 00:12:09.473 13:54:49 -- json_config/common.sh@40 -- # (( i++ )) 00:12:09.473 13:54:49 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:09.473 13:54:49 -- json_config/common.sh@41 -- # kill -0 61536 00:12:09.473 13:54:49 -- json_config/common.sh@45 -- # sleep 0.5 00:12:10.040 13:54:49 -- json_config/common.sh@40 -- # (( i++ )) 00:12:10.040 13:54:49 -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:10.040 13:54:49 -- json_config/common.sh@41 -- # kill -0 61536 00:12:10.040 13:54:49 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:10.040 13:54:49 -- json_config/common.sh@43 -- # break 00:12:10.040 SPDK target shutdown done 00:12:10.040 Success 00:12:10.040 13:54:49 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:10.040 13:54:49 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:10.040 13:54:49 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:10.040 00:12:10.040 real 0m4.702s 00:12:10.040 user 0m4.205s 00:12:10.040 sys 0m0.591s 00:12:10.040 13:54:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.040 13:54:49 -- common/autotest_common.sh@10 -- # set +x 00:12:10.040 ************************************ 00:12:10.040 END TEST json_config_extra_key 00:12:10.040 ************************************ 00:12:10.040 13:54:49 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:10.040 13:54:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:10.040 13:54:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.040 13:54:49 -- common/autotest_common.sh@10 -- # set +x 00:12:10.040 ************************************ 00:12:10.040 START TEST alias_rpc 00:12:10.040 ************************************ 00:12:10.040 13:54:49 -- common/autotest_common.sh@1111 
-- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:10.299 * Looking for test storage... 00:12:10.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:10.299 13:54:49 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:10.299 13:54:49 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61662 00:12:10.299 13:54:49 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:10.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.299 13:54:49 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61662 00:12:10.300 13:54:49 -- common/autotest_common.sh@817 -- # '[' -z 61662 ']' 00:12:10.300 13:54:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.300 13:54:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:10.300 13:54:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.300 13:54:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:10.300 13:54:49 -- common/autotest_common.sh@10 -- # set +x 00:12:10.300 [2024-04-26 13:54:49.895711] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:10.300 [2024-04-26 13:54:49.896517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61662 ] 00:12:10.558 [2024-04-26 13:54:50.069916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.817 [2024-04-26 13:54:50.316587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.786 13:54:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:11.786 13:54:51 -- common/autotest_common.sh@850 -- # return 0 00:12:11.786 13:54:51 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:12.045 13:54:51 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61662 00:12:12.045 13:54:51 -- common/autotest_common.sh@936 -- # '[' -z 61662 ']' 00:12:12.045 13:54:51 -- common/autotest_common.sh@940 -- # kill -0 61662 00:12:12.045 13:54:51 -- common/autotest_common.sh@941 -- # uname 00:12:12.045 13:54:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:12.045 13:54:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61662 00:12:12.045 killing process with pid 61662 00:12:12.045 13:54:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:12.045 13:54:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:12.045 13:54:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61662' 00:12:12.045 13:54:51 -- common/autotest_common.sh@955 -- # kill 61662 00:12:12.045 13:54:51 -- common/autotest_common.sh@960 -- # wait 61662 00:12:14.579 ************************************ 00:12:14.579 END TEST alias_rpc 00:12:14.579 ************************************ 00:12:14.579 00:12:14.579 real 0m4.418s 00:12:14.579 user 0m4.405s 00:12:14.579 sys 0m0.593s 00:12:14.579 13:54:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.579 13:54:54 -- common/autotest_common.sh@10 -- # set +x 00:12:14.579 13:54:54 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:12:14.579 13:54:54 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility 
/home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:14.579 13:54:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:14.579 13:54:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.579 13:54:54 -- common/autotest_common.sh@10 -- # set +x 00:12:14.579 ************************************ 00:12:14.579 START TEST dpdk_mem_utility 00:12:14.579 ************************************ 00:12:14.579 13:54:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:14.838 * Looking for test storage... 00:12:14.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:14.838 13:54:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:14.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.838 13:54:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61783 00:12:14.838 13:54:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:14.838 13:54:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61783 00:12:14.838 13:54:54 -- common/autotest_common.sh@817 -- # '[' -z 61783 ']' 00:12:14.838 13:54:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.838 13:54:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.838 13:54:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.838 13:54:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.838 13:54:54 -- common/autotest_common.sh@10 -- # set +x 00:12:14.838 [2024-04-26 13:54:54.475304] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
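The memory report that follows is produced by asking the target to dump its DPDK allocator state and then summarizing the dump file; the RPC method, script, and dump path all appear in the trace below:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # replies with {"filename": "/tmp/spdk_mem_dump.txt"}
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0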
00:12:14.838 [2024-04-26 13:54:54.475427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61783 ] 00:12:15.098 [2024-04-26 13:54:54.646087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.357 [2024-04-26 13:54:54.883761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.315 13:54:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:16.315 13:54:55 -- common/autotest_common.sh@850 -- # return 0 00:12:16.315 13:54:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:16.315 13:54:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:16.315 13:54:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:16.315 13:54:55 -- common/autotest_common.sh@10 -- # set +x 00:12:16.315 { 00:12:16.315 "filename": "/tmp/spdk_mem_dump.txt" 00:12:16.315 } 00:12:16.315 13:54:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:16.315 13:54:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:16.315 DPDK memory size 820.000000 MiB in 1 heap(s) 00:12:16.315 1 heaps totaling size 820.000000 MiB 00:12:16.315 size: 820.000000 MiB heap id: 0 00:12:16.315 end heaps---------- 00:12:16.315 8 mempools totaling size 598.116089 MiB 00:12:16.315 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:16.315 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:16.315 size: 84.521057 MiB name: bdev_io_61783 00:12:16.315 size: 51.011292 MiB name: evtpool_61783 00:12:16.315 size: 50.003479 MiB name: msgpool_61783 00:12:16.315 size: 21.763794 MiB name: PDU_Pool 00:12:16.315 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:16.315 size: 0.026123 MiB name: Session_Pool 00:12:16.315 end mempools------- 00:12:16.315 6 memzones totaling size 4.142822 MiB 00:12:16.315 size: 1.000366 MiB name: RG_ring_0_61783 00:12:16.315 size: 1.000366 MiB name: RG_ring_1_61783 00:12:16.315 size: 1.000366 MiB name: RG_ring_4_61783 00:12:16.315 size: 1.000366 MiB name: RG_ring_5_61783 00:12:16.315 size: 0.125366 MiB name: RG_ring_2_61783 00:12:16.315 size: 0.015991 MiB name: RG_ring_3_61783 00:12:16.315 end memzones------- 00:12:16.315 13:54:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:16.577 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:12:16.577 list of free elements. 
size: 18.469727 MiB 00:12:16.577 element at address: 0x200000400000 with size: 1.999451 MiB 00:12:16.577 element at address: 0x200000800000 with size: 1.996887 MiB 00:12:16.577 element at address: 0x200007000000 with size: 1.995972 MiB 00:12:16.577 element at address: 0x20000b200000 with size: 1.995972 MiB 00:12:16.577 element at address: 0x200019100040 with size: 0.999939 MiB 00:12:16.577 element at address: 0x200019500040 with size: 0.999939 MiB 00:12:16.577 element at address: 0x200019600000 with size: 0.999329 MiB 00:12:16.577 element at address: 0x200003e00000 with size: 0.996094 MiB 00:12:16.577 element at address: 0x200032200000 with size: 0.994324 MiB 00:12:16.577 element at address: 0x200018e00000 with size: 0.959656 MiB 00:12:16.577 element at address: 0x200019900040 with size: 0.937256 MiB 00:12:16.577 element at address: 0x200000200000 with size: 0.834351 MiB 00:12:16.577 element at address: 0x20001b000000 with size: 0.568542 MiB 00:12:16.577 element at address: 0x200019200000 with size: 0.488708 MiB 00:12:16.577 element at address: 0x200019a00000 with size: 0.485413 MiB 00:12:16.577 element at address: 0x200013800000 with size: 0.468872 MiB 00:12:16.577 element at address: 0x200028400000 with size: 0.392883 MiB 00:12:16.577 element at address: 0x200003a00000 with size: 0.356140 MiB 00:12:16.577 list of standard malloc elements. size: 199.265869 MiB 00:12:16.577 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:12:16.577 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:12:16.577 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:12:16.577 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:12:16.577 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:12:16.577 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:12:16.577 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:12:16.577 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:12:16.577 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:12:16.577 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:12:16.577 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:12:16.577 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6d00 with size: 0.000244 MiB 
00:12:16.577 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200003aff980 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200003affa80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200003eff000 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200013878080 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200013878180 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200013878280 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200013878380 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200013878480 with size: 0.000244 MiB 00:12:16.577 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:12:16.577 element at address: 0x200019abc680 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:12:16.577 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b093bc0 
with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:12:16.578 element at address: 0x200028464940 with size: 0.000244 MiB 00:12:16.578 element at address: 0x200028464a40 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846b700 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846b980 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846be80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c080 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c180 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c280 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c380 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c480 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c580 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c680 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c780 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c880 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846c980 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ce80 with size: 0.000244 MiB 
00:12:16.578 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d080 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d180 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d280 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d380 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d480 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d580 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d680 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d780 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d880 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846d980 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846da80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846db80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846de80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846df80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e080 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e180 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e280 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e380 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e480 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e580 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e680 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e780 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e880 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846e980 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f080 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f180 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f280 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f380 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f480 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f580 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f680 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f780 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f880 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846f980 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:12:16.578 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:12:16.578 list of memzone associated elements. 
size: 602.264404 MiB 00:12:16.578 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:12:16.578 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:16.578 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:12:16.578 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:16.578 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:12:16.578 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61783_0 00:12:16.578 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:12:16.578 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61783_0 00:12:16.578 element at address: 0x200003fff340 with size: 48.003113 MiB 00:12:16.578 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61783_0 00:12:16.578 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:12:16.578 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:16.578 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:12:16.578 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:16.578 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:12:16.578 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61783 00:12:16.578 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:12:16.578 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61783 00:12:16.578 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:12:16.578 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61783 00:12:16.578 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:12:16.578 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:16.578 element at address: 0x200019abc780 with size: 1.008179 MiB 00:12:16.578 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:16.578 element at address: 0x200018efde00 with size: 1.008179 MiB 00:12:16.578 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:16.578 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:12:16.578 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:16.578 element at address: 0x200003eff100 with size: 1.000549 MiB 00:12:16.578 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61783 00:12:16.578 element at address: 0x200003affb80 with size: 1.000549 MiB 00:12:16.578 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61783 00:12:16.578 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:12:16.578 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61783 00:12:16.578 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:12:16.578 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61783 00:12:16.578 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:12:16.578 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61783 00:12:16.578 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:12:16.578 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:16.578 element at address: 0x200013878680 with size: 0.500549 MiB 00:12:16.578 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:16.578 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:12:16.578 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:16.578 element at address: 0x200003adf740 with size: 0.125549 MiB 00:12:16.578 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61783 00:12:16.578 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:12:16.578 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:16.578 element at address: 0x200028464b40 with size: 0.023804 MiB 00:12:16.579 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:16.579 element at address: 0x200003adb500 with size: 0.016174 MiB 00:12:16.579 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61783 00:12:16.579 element at address: 0x20002846acc0 with size: 0.002502 MiB 00:12:16.579 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:16.579 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:12:16.579 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61783 00:12:16.579 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:12:16.579 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61783 00:12:16.579 element at address: 0x20002846b800 with size: 0.000366 MiB 00:12:16.579 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:16.579 13:54:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:16.579 13:54:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61783 00:12:16.579 13:54:55 -- common/autotest_common.sh@936 -- # '[' -z 61783 ']' 00:12:16.579 13:54:55 -- common/autotest_common.sh@940 -- # kill -0 61783 00:12:16.579 13:54:55 -- common/autotest_common.sh@941 -- # uname 00:12:16.579 13:54:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.579 13:54:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61783 00:12:16.579 killing process with pid 61783 00:12:16.579 13:54:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:16.579 13:54:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:16.579 13:54:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61783' 00:12:16.579 13:54:56 -- common/autotest_common.sh@955 -- # kill 61783 00:12:16.579 13:54:56 -- common/autotest_common.sh@960 -- # wait 61783 00:12:19.168 ************************************ 00:12:19.168 END TEST dpdk_mem_utility 00:12:19.168 ************************************ 00:12:19.168 00:12:19.168 real 0m4.217s 00:12:19.168 user 0m4.117s 00:12:19.168 sys 0m0.568s 00:12:19.168 13:54:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.168 13:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:19.168 13:54:58 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:19.168 13:54:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:19.168 13:54:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.168 13:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:19.168 ************************************ 00:12:19.168 START TEST event 00:12:19.168 ************************************ 00:12:19.168 13:54:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:19.168 * Looking for test storage... 
00:12:19.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:19.168 13:54:58 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:19.168 13:54:58 -- bdev/nbd_common.sh@6 -- # set -e 00:12:19.168 13:54:58 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:19.168 13:54:58 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:19.168 13:54:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.168 13:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:19.168 ************************************ 00:12:19.168 START TEST event_perf 00:12:19.168 ************************************ 00:12:19.168 13:54:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:19.168 Running I/O for 1 seconds...[2024-04-26 13:54:58.809256] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:12:19.168 [2024-04-26 13:54:58.809494] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61910 ] 00:12:19.427 [2024-04-26 13:54:58.983813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.686 [2024-04-26 13:54:59.237634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.686 [2024-04-26 13:54:59.237814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.686 [2024-04-26 13:54:59.237973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.686 [2024-04-26 13:54:59.238007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.065 Running I/O for 1 seconds... 00:12:21.065 lcore 0: 185859 00:12:21.065 lcore 1: 185858 00:12:21.065 lcore 2: 185858 00:12:21.065 lcore 3: 185858 00:12:21.065 done. 00:12:21.065 00:12:21.065 real 0m1.891s 00:12:21.065 user 0m4.635s 00:12:21.065 sys 0m0.126s 00:12:21.065 ************************************ 00:12:21.065 END TEST event_perf 00:12:21.065 ************************************ 00:12:21.065 13:55:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:21.065 13:55:00 -- common/autotest_common.sh@10 -- # set +x 00:12:21.065 13:55:00 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:21.065 13:55:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:21.065 13:55:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.065 13:55:00 -- common/autotest_common.sh@10 -- # set +x 00:12:21.324 ************************************ 00:12:21.324 START TEST event_reactor 00:12:21.324 ************************************ 00:12:21.324 13:55:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:21.324 [2024-04-26 13:55:00.856082] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:12:21.324 [2024-04-26 13:55:00.856224] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61960 ] 00:12:21.582 [2024-04-26 13:55:01.022402] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.841 [2024-04-26 13:55:01.264032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.215 test_start 00:12:23.215 oneshot 00:12:23.215 tick 100 00:12:23.215 tick 100 00:12:23.215 tick 250 00:12:23.215 tick 100 00:12:23.215 tick 100 00:12:23.215 tick 100 00:12:23.215 tick 250 00:12:23.215 tick 500 00:12:23.215 tick 100 00:12:23.215 tick 100 00:12:23.215 tick 250 00:12:23.215 tick 100 00:12:23.215 tick 100 00:12:23.215 test_end 00:12:23.216 00:12:23.216 real 0m1.865s 00:12:23.216 user 0m1.647s 00:12:23.216 sys 0m0.108s 00:12:23.216 13:55:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:23.216 13:55:02 -- common/autotest_common.sh@10 -- # set +x 00:12:23.216 ************************************ 00:12:23.216 END TEST event_reactor 00:12:23.216 ************************************ 00:12:23.216 13:55:02 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:23.216 13:55:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:23.216 13:55:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:23.216 13:55:02 -- common/autotest_common.sh@10 -- # set +x 00:12:23.216 ************************************ 00:12:23.216 START TEST event_reactor_perf 00:12:23.216 ************************************ 00:12:23.216 13:55:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:23.216 [2024-04-26 13:55:02.846170] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:12:23.216 [2024-04-26 13:55:02.846324] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62006 ] 00:12:23.474 [2024-04-26 13:55:03.027102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.732 [2024-04-26 13:55:03.267380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.106 test_start 00:12:25.106 test_end 00:12:25.106 Performance: 344372 events per second 00:12:25.106 00:12:25.106 real 0m1.860s 00:12:25.106 user 0m1.635s 00:12:25.106 sys 0m0.115s 00:12:25.106 ************************************ 00:12:25.106 END TEST event_reactor_perf 00:12:25.106 ************************************ 00:12:25.106 13:55:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:25.106 13:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:25.106 13:55:04 -- event/event.sh@49 -- # uname -s 00:12:25.106 13:55:04 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:25.106 13:55:04 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:25.106 13:55:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:25.106 13:55:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:25.106 13:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:25.365 ************************************ 00:12:25.365 START TEST event_scheduler 00:12:25.365 ************************************ 00:12:25.365 13:55:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:25.365 * Looking for test storage... 00:12:25.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:25.365 13:55:04 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:25.365 13:55:04 -- scheduler/scheduler.sh@35 -- # scheduler_pid=62080 00:12:25.365 13:55:04 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:25.365 13:55:04 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:25.365 13:55:04 -- scheduler/scheduler.sh@37 -- # waitforlisten 62080 00:12:25.365 13:55:04 -- common/autotest_common.sh@817 -- # '[' -z 62080 ']' 00:12:25.365 13:55:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.365 13:55:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:25.365 13:55:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.365 13:55:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:25.365 13:55:04 -- common/autotest_common.sh@10 -- # set +x 00:12:25.624 [2024-04-26 13:55:05.049754] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:12:25.624 [2024-04-26 13:55:05.050616] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62080 ] 00:12:25.624 [2024-04-26 13:55:05.221888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.882 [2024-04-26 13:55:05.466037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.882 [2024-04-26 13:55:05.466254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.882 [2024-04-26 13:55:05.466267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.882 [2024-04-26 13:55:05.466267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.449 13:55:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:26.449 13:55:05 -- common/autotest_common.sh@850 -- # return 0 00:12:26.449 13:55:05 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:26.449 13:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.449 13:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:26.449 POWER: Env isn't set yet! 00:12:26.449 POWER: Attempting to initialise ACPI cpufreq power management... 00:12:26.449 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:26.449 POWER: Cannot set governor of lcore 0 to userspace 00:12:26.449 POWER: Attempting to initialise PSTAT power management... 00:12:26.449 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:26.449 POWER: Cannot set governor of lcore 0 to performance 00:12:26.449 POWER: Attempting to initialise AMD PSTATE power management... 00:12:26.449 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:26.449 POWER: Cannot set governor of lcore 0 to userspace 00:12:26.449 POWER: Attempting to initialise CPPC power management... 00:12:26.449 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:26.449 POWER: Cannot set governor of lcore 0 to userspace 00:12:26.449 POWER: Attempting to initialise VM power management... 00:12:26.449 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:26.449 POWER: Unable to set Power Management Environment for lcore 0 00:12:26.449 [2024-04-26 13:55:05.957259] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:12:26.449 [2024-04-26 13:55:05.957352] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:12:26.449 [2024-04-26 13:55:05.957394] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:12:26.449 13:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.449 13:55:05 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:26.449 13:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.449 13:55:05 -- common/autotest_common.sh@10 -- # set +x 00:12:26.708 [2024-04-26 13:55:06.348355] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
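For reference, the scheduler bring-up exercised in the trace above reduces to two RPCs against the app started with --wait-for-rpc; a minimal sketch, assuming an SPDK app already listening on the default /var/tmp/spdk.sock (the cpufreq/governor errors above are what the fall-back path looks like on hosts without writable scaling_governor files):

    # RPC names and the rpc.py path are taken from the trace; the socket is the SPDK default.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc framework_set_scheduler dynamic   # may log governor errors and fall back, as above
    $rpc framework_start_init              # leave --wait-for-rpc mode and start the reactors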
00:12:26.708 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.708 13:55:06 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:26.708 13:55:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:26.708 13:55:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:26.708 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 ************************************ 00:12:26.969 START TEST scheduler_create_thread 00:12:26.969 ************************************ 00:12:26.969 13:55:06 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 2 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 3 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 4 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 5 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 6 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 7 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 8 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 9 00:12:26.969 
13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 10 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:26.969 13:55:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.969 13:55:06 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:26.969 13:55:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.969 13:55:06 -- common/autotest_common.sh@10 -- # set +x 00:12:27.908 13:55:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.908 13:55:07 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:27.908 13:55:07 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:27.908 13:55:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.908 13:55:07 -- common/autotest_common.sh@10 -- # set +x 00:12:29.284 ************************************ 00:12:29.284 END TEST scheduler_create_thread 00:12:29.284 ************************************ 00:12:29.284 13:55:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:29.284 00:12:29.284 real 0m2.133s 00:12:29.284 user 0m0.022s 00:12:29.284 sys 0m0.010s 00:12:29.284 13:55:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:29.284 13:55:08 -- common/autotest_common.sh@10 -- # set +x 00:12:29.284 13:55:08 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:29.284 13:55:08 -- scheduler/scheduler.sh@46 -- # killprocess 62080 00:12:29.284 13:55:08 -- common/autotest_common.sh@936 -- # '[' -z 62080 ']' 00:12:29.284 13:55:08 -- common/autotest_common.sh@940 -- # kill -0 62080 00:12:29.284 13:55:08 -- common/autotest_common.sh@941 -- # uname 00:12:29.284 13:55:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:29.284 13:55:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62080 00:12:29.284 killing process with pid 62080 00:12:29.284 13:55:08 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:29.284 13:55:08 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:29.284 13:55:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62080' 00:12:29.284 13:55:08 -- common/autotest_common.sh@955 -- # kill 62080 00:12:29.284 13:55:08 -- common/autotest_common.sh@960 -- # wait 62080 00:12:29.543 [2024-04-26 13:55:09.049658] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
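The thread lifecycle driven by scheduler_create_thread above boils down to three plugin RPCs; a sketch under the assumption that PYTHONPATH points at test/event/scheduler so rpc.py can import scheduler_plugin:

    # Thread names, masks and the set_active value mirror the trace; the create call
    # prints the new thread id (11 and 12 in this run), which the later calls operate on.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    tid=$($rpc scheduler_thread_create -n half_active -a 0)   # create a thread with 0% active load
    $rpc scheduler_thread_set_active "$tid" 50                # raise it to 50%
    $rpc scheduler_thread_delete "$tid"                       # and delete it again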
00:12:30.920 00:12:30.920 real 0m5.573s 00:12:30.920 user 0m9.493s 00:12:30.920 sys 0m0.608s 00:12:30.920 13:55:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:30.920 ************************************ 00:12:30.920 END TEST event_scheduler 00:12:30.920 ************************************ 00:12:30.920 13:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:30.920 13:55:10 -- event/event.sh@51 -- # modprobe -n nbd 00:12:30.920 13:55:10 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:30.920 13:55:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:30.920 13:55:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:30.920 13:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:30.920 ************************************ 00:12:30.920 START TEST app_repeat 00:12:30.920 ************************************ 00:12:30.920 13:55:10 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:12:30.920 13:55:10 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.920 13:55:10 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.920 13:55:10 -- event/event.sh@13 -- # local nbd_list 00:12:30.920 13:55:10 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:30.920 13:55:10 -- event/event.sh@14 -- # local bdev_list 00:12:30.920 13:55:10 -- event/event.sh@15 -- # local repeat_times=4 00:12:30.920 13:55:10 -- event/event.sh@17 -- # modprobe nbd 00:12:30.920 13:55:10 -- event/event.sh@19 -- # repeat_pid=62223 00:12:30.920 Process app_repeat pid: 62223 00:12:30.920 spdk_app_start Round 0 00:12:30.920 13:55:10 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:30.920 13:55:10 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62223' 00:12:30.920 13:55:10 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:30.920 13:55:10 -- event/event.sh@23 -- # for i in {0..2} 00:12:30.920 13:55:10 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:30.920 13:55:10 -- event/event.sh@25 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:12:30.920 13:55:10 -- common/autotest_common.sh@817 -- # '[' -z 62223 ']' 00:12:30.920 13:55:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:30.920 13:55:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:30.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:30.920 13:55:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:30.920 13:55:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:30.920 13:55:10 -- common/autotest_common.sh@10 -- # set +x 00:12:31.179 [2024-04-26 13:55:10.647695] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:12:31.179 [2024-04-26 13:55:10.647852] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62223 ] 00:12:31.179 [2024-04-26 13:55:10.827118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:31.439 [2024-04-26 13:55:11.076421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.439 [2024-04-26 13:55:11.076447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.005 13:55:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:32.005 13:55:11 -- common/autotest_common.sh@850 -- # return 0 00:12:32.005 13:55:11 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:32.263 Malloc0 00:12:32.263 13:55:11 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:32.522 Malloc1 00:12:32.522 13:55:12 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@12 -- # local i 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.523 13:55:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:32.782 /dev/nbd0 00:12:32.782 13:55:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:32.782 13:55:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:32.782 13:55:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:32.782 13:55:12 -- common/autotest_common.sh@855 -- # local i 00:12:32.782 13:55:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:32.782 13:55:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:32.782 13:55:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:32.782 13:55:12 -- common/autotest_common.sh@859 -- # break 00:12:32.782 13:55:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:32.782 13:55:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:32.782 13:55:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:32.782 1+0 records in 00:12:32.782 1+0 records out 00:12:32.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290423 s, 14.1 MB/s 00:12:32.782 13:55:12 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:32.782 13:55:12 -- common/autotest_common.sh@872 -- # size=4096 00:12:32.782 13:55:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:32.782 13:55:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:32.782 13:55:12 -- common/autotest_common.sh@875 -- # return 0 00:12:32.782 13:55:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:32.782 13:55:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:32.782 13:55:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:33.041 /dev/nbd1 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:33.041 13:55:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:33.041 13:55:12 -- common/autotest_common.sh@855 -- # local i 00:12:33.041 13:55:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:33.041 13:55:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:33.041 13:55:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:33.041 13:55:12 -- common/autotest_common.sh@859 -- # break 00:12:33.041 13:55:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:33.041 13:55:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:33.041 13:55:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:33.041 1+0 records in 00:12:33.041 1+0 records out 00:12:33.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312177 s, 13.1 MB/s 00:12:33.041 13:55:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:33.041 13:55:12 -- common/autotest_common.sh@872 -- # size=4096 00:12:33.041 13:55:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:33.041 13:55:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:33.041 13:55:12 -- common/autotest_common.sh@875 -- # return 0 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.041 13:55:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:33.300 { 00:12:33.300 "bdev_name": "Malloc0", 00:12:33.300 "nbd_device": "/dev/nbd0" 00:12:33.300 }, 00:12:33.300 { 00:12:33.300 "bdev_name": "Malloc1", 00:12:33.300 "nbd_device": "/dev/nbd1" 00:12:33.300 } 00:12:33.300 ]' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:33.300 { 00:12:33.300 "bdev_name": "Malloc0", 00:12:33.300 "nbd_device": "/dev/nbd0" 00:12:33.300 }, 00:12:33.300 { 00:12:33.300 "bdev_name": "Malloc1", 00:12:33.300 "nbd_device": "/dev/nbd1" 00:12:33.300 } 00:12:33.300 ]' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:33.300 /dev/nbd1' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:33.300 /dev/nbd1' 00:12:33.300 13:55:12 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@65 -- # count=2 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@95 -- # count=2 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:33.300 256+0 records in 00:12:33.300 256+0 records out 00:12:33.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123747 s, 84.7 MB/s 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:33.300 256+0 records in 00:12:33.300 256+0 records out 00:12:33.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02852 s, 36.8 MB/s 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:33.300 256+0 records in 00:12:33.300 256+0 records out 00:12:33.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0359473 s, 29.2 MB/s 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:33.300 13:55:12 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@51 -- # local i 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.301 13:55:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@41 -- # break 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:33.560 13:55:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@41 -- # break 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@45 -- # return 0 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:33.818 13:55:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:34.088 13:55:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:34.088 13:55:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:34.088 13:55:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@65 -- # true 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@65 -- # count=0 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@104 -- # count=0 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:34.089 13:55:13 -- bdev/nbd_common.sh@109 -- # return 0 00:12:34.089 13:55:13 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:34.660 13:55:14 -- event/event.sh@35 -- # sleep 3 00:12:36.037 [2024-04-26 13:55:15.386259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:36.037 [2024-04-26 13:55:15.616563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.037 [2024-04-26 13:55:15.616566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.296 [2024-04-26 13:55:15.860550] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:36.296 [2024-04-26 13:55:15.860635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
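Each app_repeat round runs the same data check as the one traced above; condensed into a sketch (paths and sizes copied from the trace, the real logic lives in nbd_rpc_data_verify/nbd_dd_data_verify in test/bdev/nbd_common.sh):

    # Write 1 MiB of random data through each nbd export and compare it back.
    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # seed the reference file
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it to the exported bdev
        cmp -b -n 1M "$tmp" "$nbd"                              # fail the round if contents differ
    done
    rm "$tmp"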
00:12:37.674 13:55:17 -- event/event.sh@23 -- # for i in {0..2} 00:12:37.675 spdk_app_start Round 1 00:12:37.675 13:55:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:37.675 13:55:17 -- event/event.sh@25 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:12:37.675 13:55:17 -- common/autotest_common.sh@817 -- # '[' -z 62223 ']' 00:12:37.675 13:55:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:37.675 13:55:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:37.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:37.675 13:55:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:37.675 13:55:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:37.675 13:55:17 -- common/autotest_common.sh@10 -- # set +x 00:12:37.675 13:55:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:37.675 13:55:17 -- common/autotest_common.sh@850 -- # return 0 00:12:37.675 13:55:17 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:37.933 Malloc0 00:12:37.933 13:55:17 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:38.213 Malloc1 00:12:38.213 13:55:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@12 -- # local i 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.213 13:55:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:38.473 /dev/nbd0 00:12:38.473 13:55:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:38.473 13:55:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:38.473 13:55:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:38.473 13:55:18 -- common/autotest_common.sh@855 -- # local i 00:12:38.473 13:55:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:38.473 13:55:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:38.473 13:55:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:38.473 13:55:18 -- common/autotest_common.sh@859 -- # break 00:12:38.473 13:55:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:38.473 13:55:18 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:12:38.473 13:55:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:38.473 1+0 records in 00:12:38.473 1+0 records out 00:12:38.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483544 s, 8.5 MB/s 00:12:38.473 13:55:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:38.473 13:55:18 -- common/autotest_common.sh@872 -- # size=4096 00:12:38.473 13:55:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:38.473 13:55:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:38.473 13:55:18 -- common/autotest_common.sh@875 -- # return 0 00:12:38.473 13:55:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.473 13:55:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.473 13:55:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:38.732 /dev/nbd1 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:38.732 13:55:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:38.732 13:55:18 -- common/autotest_common.sh@855 -- # local i 00:12:38.732 13:55:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:38.732 13:55:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:38.732 13:55:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:38.732 13:55:18 -- common/autotest_common.sh@859 -- # break 00:12:38.732 13:55:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:38.732 13:55:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:38.732 13:55:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:38.732 1+0 records in 00:12:38.732 1+0 records out 00:12:38.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176955 s, 23.1 MB/s 00:12:38.732 13:55:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:38.732 13:55:18 -- common/autotest_common.sh@872 -- # size=4096 00:12:38.732 13:55:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:38.732 13:55:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:38.732 13:55:18 -- common/autotest_common.sh@875 -- # return 0 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:38.732 13:55:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:38.992 { 00:12:38.992 "bdev_name": "Malloc0", 00:12:38.992 "nbd_device": "/dev/nbd0" 00:12:38.992 }, 00:12:38.992 { 00:12:38.992 "bdev_name": "Malloc1", 00:12:38.992 "nbd_device": "/dev/nbd1" 00:12:38.992 } 00:12:38.992 ]' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:38.992 { 00:12:38.992 "bdev_name": "Malloc0", 00:12:38.992 "nbd_device": "/dev/nbd0" 00:12:38.992 }, 00:12:38.992 { 00:12:38.992 "bdev_name": "Malloc1", 00:12:38.992 "nbd_device": "/dev/nbd1" 00:12:38.992 } 
00:12:38.992 ]' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:38.992 /dev/nbd1' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:38.992 /dev/nbd1' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@65 -- # count=2 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@95 -- # count=2 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:38.992 256+0 records in 00:12:38.992 256+0 records out 00:12:38.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00561903 s, 187 MB/s 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:38.992 256+0 records in 00:12:38.992 256+0 records out 00:12:38.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275639 s, 38.0 MB/s 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:38.992 256+0 records in 00:12:38.992 256+0 records out 00:12:38.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0383492 s, 27.3 MB/s 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:38.992 13:55:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:12:39.252 13:55:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@51 -- # local i 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@41 -- # break 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.252 13:55:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@41 -- # break 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@45 -- # return 0 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:39.821 13:55:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@65 -- # true 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@65 -- # count=0 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@104 -- # count=0 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:40.079 13:55:19 -- bdev/nbd_common.sh@109 -- # return 0 00:12:40.079 13:55:19 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:40.337 13:55:19 -- event/event.sh@35 -- # sleep 3 00:12:41.713 [2024-04-26 13:55:21.241735] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:41.971 [2024-04-26 13:55:21.480561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.971 [2024-04-26 13:55:21.480580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.267 [2024-04-26 13:55:21.720479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:12:42.267 [2024-04-26 13:55:21.720554] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:43.671 spdk_app_start Round 2 00:12:43.671 13:55:22 -- event/event.sh@23 -- # for i in {0..2} 00:12:43.671 13:55:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:43.671 13:55:22 -- event/event.sh@25 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:12:43.671 13:55:22 -- common/autotest_common.sh@817 -- # '[' -z 62223 ']' 00:12:43.671 13:55:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:43.671 13:55:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:43.671 13:55:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:43.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:43.671 13:55:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:43.671 13:55:22 -- common/autotest_common.sh@10 -- # set +x 00:12:43.671 13:55:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:43.671 13:55:23 -- common/autotest_common.sh@850 -- # return 0 00:12:43.671 13:55:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:43.930 Malloc0 00:12:43.930 13:55:23 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:44.189 Malloc1 00:12:44.189 13:55:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@12 -- # local i 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.189 13:55:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:44.448 /dev/nbd0 00:12:44.448 13:55:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:44.448 13:55:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:44.448 13:55:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:12:44.448 13:55:23 -- common/autotest_common.sh@855 -- # local i 00:12:44.448 13:55:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:44.448 13:55:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:44.448 13:55:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:12:44.448 13:55:23 -- common/autotest_common.sh@859 
-- # break 00:12:44.448 13:55:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:44.448 13:55:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:44.448 13:55:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:44.448 1+0 records in 00:12:44.448 1+0 records out 00:12:44.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366993 s, 11.2 MB/s 00:12:44.448 13:55:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:44.448 13:55:23 -- common/autotest_common.sh@872 -- # size=4096 00:12:44.448 13:55:23 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:44.448 13:55:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:44.448 13:55:23 -- common/autotest_common.sh@875 -- # return 0 00:12:44.448 13:55:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.448 13:55:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.448 13:55:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:44.708 /dev/nbd1 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:44.708 13:55:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:12:44.708 13:55:24 -- common/autotest_common.sh@855 -- # local i 00:12:44.708 13:55:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:12:44.708 13:55:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:12:44.708 13:55:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:12:44.708 13:55:24 -- common/autotest_common.sh@859 -- # break 00:12:44.708 13:55:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:12:44.708 13:55:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:12:44.708 13:55:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:44.708 1+0 records in 00:12:44.708 1+0 records out 00:12:44.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352222 s, 11.6 MB/s 00:12:44.708 13:55:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:44.708 13:55:24 -- common/autotest_common.sh@872 -- # size=4096 00:12:44.708 13:55:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:44.708 13:55:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:12:44.708 13:55:24 -- common/autotest_common.sh@875 -- # return 0 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.708 13:55:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:44.968 { 00:12:44.968 "bdev_name": "Malloc0", 00:12:44.968 "nbd_device": "/dev/nbd0" 00:12:44.968 }, 00:12:44.968 { 00:12:44.968 "bdev_name": "Malloc1", 00:12:44.968 "nbd_device": "/dev/nbd1" 00:12:44.968 } 00:12:44.968 ]' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:44.968 { 00:12:44.968 "bdev_name": "Malloc0", 00:12:44.968 
"nbd_device": "/dev/nbd0" 00:12:44.968 }, 00:12:44.968 { 00:12:44.968 "bdev_name": "Malloc1", 00:12:44.968 "nbd_device": "/dev/nbd1" 00:12:44.968 } 00:12:44.968 ]' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:44.968 /dev/nbd1' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:44.968 /dev/nbd1' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@65 -- # count=2 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@95 -- # count=2 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:44.968 256+0 records in 00:12:44.968 256+0 records out 00:12:44.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125666 s, 83.4 MB/s 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:44.968 256+0 records in 00:12:44.968 256+0 records out 00:12:44.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271341 s, 38.6 MB/s 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:44.968 256+0 records in 00:12:44.968 256+0 records out 00:12:44.968 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034646 s, 30.3 MB/s 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:44.968 13:55:24 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@51 -- # local i 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.968 13:55:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:45.227 13:55:24 -- bdev/nbd_common.sh@41 -- # break 00:12:45.228 13:55:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.228 13:55:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:45.228 13:55:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@41 -- # break 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.487 13:55:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@65 -- # true 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@65 -- # count=0 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@104 -- # count=0 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:45.748 13:55:25 -- bdev/nbd_common.sh@109 -- # return 0 00:12:45.748 13:55:25 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:46.324 13:55:25 -- event/event.sh@35 -- # sleep 3 00:12:47.704 [2024-04-26 13:55:27.157272] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:47.963 [2024-04-26 13:55:27.393104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.963 [2024-04-26 13:55:27.393106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.223 [2024-04-26 13:55:27.638871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:12:48.223 [2024-04-26 13:55:27.638946] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:49.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:49.159 13:55:28 -- event/event.sh@38 -- # waitforlisten 62223 /var/tmp/spdk-nbd.sock 00:12:49.159 13:55:28 -- common/autotest_common.sh@817 -- # '[' -z 62223 ']' 00:12:49.159 13:55:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:49.159 13:55:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:49.159 13:55:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:49.159 13:55:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:49.159 13:55:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.452 13:55:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:49.452 13:55:28 -- common/autotest_common.sh@850 -- # return 0 00:12:49.452 13:55:28 -- event/event.sh@39 -- # killprocess 62223 00:12:49.452 13:55:28 -- common/autotest_common.sh@936 -- # '[' -z 62223 ']' 00:12:49.452 13:55:28 -- common/autotest_common.sh@940 -- # kill -0 62223 00:12:49.452 13:55:28 -- common/autotest_common.sh@941 -- # uname 00:12:49.452 13:55:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:49.452 13:55:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62223 00:12:49.452 killing process with pid 62223 00:12:49.452 13:55:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:49.452 13:55:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:49.452 13:55:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62223' 00:12:49.452 13:55:29 -- common/autotest_common.sh@955 -- # kill 62223 00:12:49.452 13:55:29 -- common/autotest_common.sh@960 -- # wait 62223 00:12:50.876 spdk_app_start is called in Round 0. 00:12:50.876 Shutdown signal received, stop current app iteration 00:12:50.876 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:12:50.876 spdk_app_start is called in Round 1. 00:12:50.876 Shutdown signal received, stop current app iteration 00:12:50.876 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:12:50.876 spdk_app_start is called in Round 2. 00:12:50.876 Shutdown signal received, stop current app iteration 00:12:50.876 Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 reinitialization... 00:12:50.876 spdk_app_start is called in Round 3. 
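The nbd portion of the run above attaches Malloc0 and Malloc1 to /dev/nbd0 and /dev/nbd1, waits for each node to show up in /proc/partitions, pushes 1 MiB of random data through each device with dd, verifies it with cmp, and then detaches the disks before killing the app with spdk_kill_instance. A condensed sketch of that flow, assuming a target is already listening on /var/tmp/spdk-nbd.sock and a Malloc0 bdev exists (paths, block sizes and commands are taken from the log; this is not a reimplementation of nbd_common.sh, and the short sleep in the readiness loop is an assumption):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=/tmp/nbdrandtest

    # Attach the bdev to an nbd node and wait until the kernel exposes it.
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done

    # Write a 1 MiB random pattern through the nbd device, then compare it back.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" /dev/nbd0 && echo "nbd0 data verified"

    # Detach and clean up.
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    rm -f "$tmp"

The same sequence is repeated for /dev/nbd1 in the log; nbd_get_disks is used before and after to confirm the expected device count.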
00:12:50.876 Shutdown signal received, stop current app iteration 00:12:50.876 ************************************ 00:12:50.876 END TEST app_repeat 00:12:50.876 ************************************ 00:12:50.876 13:55:30 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:50.876 13:55:30 -- event/event.sh@42 -- # return 0 00:12:50.876 00:12:50.876 real 0m19.678s 00:12:50.876 user 0m40.418s 00:12:50.876 sys 0m3.235s 00:12:50.876 13:55:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:50.876 13:55:30 -- common/autotest_common.sh@10 -- # set +x 00:12:50.876 13:55:30 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:50.876 13:55:30 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:50.876 13:55:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:50.876 13:55:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.876 13:55:30 -- common/autotest_common.sh@10 -- # set +x 00:12:50.876 ************************************ 00:12:50.876 START TEST cpu_locks 00:12:50.876 ************************************ 00:12:50.876 13:55:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:50.876 * Looking for test storage... 00:12:51.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:51.134 13:55:30 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:51.134 13:55:30 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:51.135 13:55:30 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:51.135 13:55:30 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:51.135 13:55:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:51.135 13:55:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.135 13:55:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.135 ************************************ 00:12:51.135 START TEST default_locks 00:12:51.135 ************************************ 00:12:51.135 13:55:30 -- common/autotest_common.sh@1111 -- # default_locks 00:12:51.135 13:55:30 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62872 00:12:51.135 13:55:30 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:51.135 13:55:30 -- event/cpu_locks.sh@47 -- # waitforlisten 62872 00:12:51.135 13:55:30 -- common/autotest_common.sh@817 -- # '[' -z 62872 ']' 00:12:51.135 13:55:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.135 13:55:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:51.135 13:55:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.135 13:55:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:51.135 13:55:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.135 [2024-04-26 13:55:30.769748] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:12:51.135 [2024-04-26 13:55:30.769868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62872 ] 00:12:51.393 [2024-04-26 13:55:30.941338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.652 [2024-04-26 13:55:31.177562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.589 13:55:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:52.589 13:55:32 -- common/autotest_common.sh@850 -- # return 0 00:12:52.589 13:55:32 -- event/cpu_locks.sh@49 -- # locks_exist 62872 00:12:52.589 13:55:32 -- event/cpu_locks.sh@22 -- # lslocks -p 62872 00:12:52.589 13:55:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:53.157 13:55:32 -- event/cpu_locks.sh@50 -- # killprocess 62872 00:12:53.157 13:55:32 -- common/autotest_common.sh@936 -- # '[' -z 62872 ']' 00:12:53.157 13:55:32 -- common/autotest_common.sh@940 -- # kill -0 62872 00:12:53.157 13:55:32 -- common/autotest_common.sh@941 -- # uname 00:12:53.157 13:55:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:53.157 13:55:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62872 00:12:53.157 13:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:53.157 13:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:53.157 killing process with pid 62872 00:12:53.157 13:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62872' 00:12:53.157 13:55:32 -- common/autotest_common.sh@955 -- # kill 62872 00:12:53.157 13:55:32 -- common/autotest_common.sh@960 -- # wait 62872 00:12:55.694 13:55:35 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62872 00:12:55.694 13:55:35 -- common/autotest_common.sh@638 -- # local es=0 00:12:55.694 13:55:35 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62872 00:12:55.694 13:55:35 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:12:55.694 13:55:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:55.694 13:55:35 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:12:55.694 13:55:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:55.694 13:55:35 -- common/autotest_common.sh@641 -- # waitforlisten 62872 00:12:55.694 13:55:35 -- common/autotest_common.sh@817 -- # '[' -z 62872 ']' 00:12:55.694 13:55:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.694 13:55:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:55.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.694 13:55:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
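The default_locks test above starts one target pinned to core 0 (-m 0x1), asserts that the process holds an spdk_cpu_lock file lock, then kills it and expects a later waitforlisten on the same pid to fail. A minimal check along the same lines, assuming a target is already running; the pgrep lookup is an assumption, while the lslocks/grep check and the /var/tmp/spdk_cpu_lock_* naming are as shown elsewhere in this run:

    pid=$(pgrep -f spdk_tgt | head -n1)   # assumes a single spdk_tgt instance

    # A target started without --disable-cpumask-locks should hold a lock
    # on a /var/tmp/spdk_cpu_lock_* file for every core in its mask.
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds its CPU core lock(s)"
    else
        echo "no spdk_cpu_lock held by pid $pid" >&2
    fi

    # Once the process is gone, the same check should find nothing.
    kill "$pid" && sleep 1
    lslocks | grep -c spdk_cpu_lock   # expected to report 0 when no target is left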
00:12:55.694 ERROR: process (pid: 62872) is no longer running 00:12:55.694 13:55:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:55.694 13:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:55.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62872) - No such process 00:12:55.694 13:55:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:55.694 13:55:35 -- common/autotest_common.sh@850 -- # return 1 00:12:55.694 13:55:35 -- common/autotest_common.sh@641 -- # es=1 00:12:55.694 13:55:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:55.694 13:55:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:55.694 13:55:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:55.694 13:55:35 -- event/cpu_locks.sh@54 -- # no_locks 00:12:55.694 13:55:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:55.694 13:55:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:55.694 13:55:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:55.694 00:12:55.694 real 0m4.588s 00:12:55.694 user 0m4.520s 00:12:55.694 sys 0m0.688s 00:12:55.694 13:55:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:55.694 13:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:55.694 ************************************ 00:12:55.694 END TEST default_locks 00:12:55.694 ************************************ 00:12:55.694 13:55:35 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:55.694 13:55:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:55.694 13:55:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.694 13:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:55.951 ************************************ 00:12:55.952 START TEST default_locks_via_rpc 00:12:55.952 ************************************ 00:12:55.952 13:55:35 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:12:55.952 13:55:35 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62964 00:12:55.952 13:55:35 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:55.952 13:55:35 -- event/cpu_locks.sh@63 -- # waitforlisten 62964 00:12:55.952 13:55:35 -- common/autotest_common.sh@817 -- # '[' -z 62964 ']' 00:12:55.952 13:55:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.952 13:55:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:55.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.952 13:55:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.952 13:55:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:55.952 13:55:35 -- common/autotest_common.sh@10 -- # set +x 00:12:55.952 [2024-04-26 13:55:35.512242] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:12:55.952 [2024-04-26 13:55:35.512404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62964 ] 00:12:56.209 [2024-04-26 13:55:35.695092] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.467 [2024-04-26 13:55:35.948069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.405 13:55:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:57.405 13:55:36 -- common/autotest_common.sh@850 -- # return 0 00:12:57.405 13:55:36 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:57.405 13:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.405 13:55:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.405 13:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.405 13:55:36 -- event/cpu_locks.sh@67 -- # no_locks 00:12:57.405 13:55:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:57.405 13:55:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:12:57.405 13:55:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:57.405 13:55:36 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:57.405 13:55:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:57.405 13:55:36 -- common/autotest_common.sh@10 -- # set +x 00:12:57.405 13:55:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:57.405 13:55:36 -- event/cpu_locks.sh@71 -- # locks_exist 62964 00:12:57.405 13:55:36 -- event/cpu_locks.sh@22 -- # lslocks -p 62964 00:12:57.405 13:55:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:57.973 13:55:37 -- event/cpu_locks.sh@73 -- # killprocess 62964 00:12:57.973 13:55:37 -- common/autotest_common.sh@936 -- # '[' -z 62964 ']' 00:12:57.973 13:55:37 -- common/autotest_common.sh@940 -- # kill -0 62964 00:12:57.973 13:55:37 -- common/autotest_common.sh@941 -- # uname 00:12:57.973 13:55:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:57.973 13:55:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62964 00:12:57.973 13:55:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:57.973 13:55:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:57.973 13:55:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62964' 00:12:57.973 killing process with pid 62964 00:12:57.973 13:55:37 -- common/autotest_common.sh@955 -- # kill 62964 00:12:57.973 13:55:37 -- common/autotest_common.sh@960 -- # wait 62964 00:13:00.511 00:13:00.511 real 0m4.677s 00:13:00.511 user 0m4.639s 00:13:00.511 sys 0m0.705s 00:13:00.511 13:55:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:00.511 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.511 ************************************ 00:13:00.511 END TEST default_locks_via_rpc 00:13:00.511 ************************************ 00:13:00.511 13:55:40 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:13:00.511 13:55:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:00.511 13:55:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.511 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.770 ************************************ 00:13:00.770 START TEST non_locking_app_on_locked_coremask 00:13:00.770 ************************************ 00:13:00.770 13:55:40 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:13:00.770 13:55:40 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63071 00:13:00.770 13:55:40 -- event/cpu_locks.sh@81 -- # waitforlisten 63071 /var/tmp/spdk.sock 00:13:00.770 13:55:40 -- common/autotest_common.sh@817 -- # '[' -z 63071 ']' 00:13:00.770 13:55:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.770 13:55:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:00.770 13:55:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.770 13:55:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:00.770 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:00.770 13:55:40 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:00.770 [2024-04-26 13:55:40.326701] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:00.770 [2024-04-26 13:55:40.326827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63071 ] 00:13:01.056 [2024-04-26 13:55:40.499602] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.325 [2024-04-26 13:55:40.748983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.281 13:55:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:02.281 13:55:41 -- common/autotest_common.sh@850 -- # return 0 00:13:02.281 13:55:41 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63099 00:13:02.281 13:55:41 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:13:02.281 13:55:41 -- event/cpu_locks.sh@85 -- # waitforlisten 63099 /var/tmp/spdk2.sock 00:13:02.281 13:55:41 -- common/autotest_common.sh@817 -- # '[' -z 63099 ']' 00:13:02.281 13:55:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:02.281 13:55:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:02.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:02.281 13:55:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:02.281 13:55:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:02.281 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:13:02.281 [2024-04-26 13:55:41.871080] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:02.281 [2024-04-26 13:55:41.871208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63099 ] 00:13:02.540 [2024-04-26 13:55:42.041077] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
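default_locks_via_rpc, which finished just above, drives the same behaviour at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them. A hedged sketch of that round trip against a target on the default /var/tmp/spdk.sock (rpc.py path as in the log; the expectation that the lock files disappear mirrors the run's no_locks check):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Drop the core locks; another process may now claim the same cores.
    "$rpc" framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core lock files held"

    # Re-acquire them; this fails with JSON-RPC error -32603 if another
    # process grabbed one of the cores in the meantime (seen later in this run).
    "$rpc" framework_enable_cpumask_locks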
00:13:02.540 [2024-04-26 13:55:42.041134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.108 [2024-04-26 13:55:42.545620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.143 13:55:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:05.143 13:55:44 -- common/autotest_common.sh@850 -- # return 0 00:13:05.143 13:55:44 -- event/cpu_locks.sh@87 -- # locks_exist 63071 00:13:05.143 13:55:44 -- event/cpu_locks.sh@22 -- # lslocks -p 63071 00:13:05.143 13:55:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:05.712 13:55:45 -- event/cpu_locks.sh@89 -- # killprocess 63071 00:13:05.712 13:55:45 -- common/autotest_common.sh@936 -- # '[' -z 63071 ']' 00:13:05.712 13:55:45 -- common/autotest_common.sh@940 -- # kill -0 63071 00:13:05.712 13:55:45 -- common/autotest_common.sh@941 -- # uname 00:13:05.712 13:55:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:05.712 13:55:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63071 00:13:05.712 13:55:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:05.712 13:55:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:05.712 killing process with pid 63071 00:13:05.712 13:55:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63071' 00:13:05.712 13:55:45 -- common/autotest_common.sh@955 -- # kill 63071 00:13:05.712 13:55:45 -- common/autotest_common.sh@960 -- # wait 63071 00:13:10.988 13:55:50 -- event/cpu_locks.sh@90 -- # killprocess 63099 00:13:10.988 13:55:50 -- common/autotest_common.sh@936 -- # '[' -z 63099 ']' 00:13:10.988 13:55:50 -- common/autotest_common.sh@940 -- # kill -0 63099 00:13:10.988 13:55:50 -- common/autotest_common.sh@941 -- # uname 00:13:10.988 13:55:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:10.988 13:55:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63099 00:13:10.988 13:55:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:10.988 13:55:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:10.988 killing process with pid 63099 00:13:10.988 13:55:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63099' 00:13:10.988 13:55:50 -- common/autotest_common.sh@955 -- # kill 63099 00:13:10.988 13:55:50 -- common/autotest_common.sh@960 -- # wait 63099 00:13:13.524 00:13:13.524 real 0m12.488s 00:13:13.524 user 0m12.626s 00:13:13.524 sys 0m1.386s 00:13:13.524 13:55:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:13.524 ************************************ 00:13:13.524 END TEST non_locking_app_on_locked_coremask 00:13:13.524 13:55:52 -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 ************************************ 00:13:13.524 13:55:52 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:13:13.524 13:55:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:13.524 13:55:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:13.524 13:55:52 -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 ************************************ 00:13:13.524 START TEST locking_app_on_unlocked_coremask 00:13:13.524 ************************************ 00:13:13.524 13:55:52 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:13:13.524 13:55:52 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63269 00:13:13.524 13:55:52 -- event/cpu_locks.sh@99 -- # waitforlisten 63269 /var/tmp/spdk.sock 
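Both of the coremask tests around this point rely on the same launch pattern: a second target is given the same core mask but its own RPC socket via -r, and whichever side is started with --disable-cpumask-locks skips the core-lock claim so the two can coexist on core 0. A sketch of that pattern with the binary path and flags from the log (backgrounding, pid handling and shutdown order are illustrative, and a readiness wait like the one sketched further below would normally sit between launch and use):

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims core 0 and the default RPC socket.
    "$tgt" -m 0x1 &
    pid1=$!

    # Second instance shares core 0 but skips the lock claim and listens on its
    # own socket, so it does not collide with the first target's /var/tmp/spdk.sock.
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!

    # ... exercise both targets, then shut them down.
    kill "$pid2" "$pid1"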
00:13:13.524 13:55:52 -- common/autotest_common.sh@817 -- # '[' -z 63269 ']' 00:13:13.524 13:55:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.524 13:55:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:13.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.524 13:55:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.524 13:55:52 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:13:13.524 13:55:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:13.524 13:55:52 -- common/autotest_common.sh@10 -- # set +x 00:13:13.524 [2024-04-26 13:55:52.956841] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:13.524 [2024-04-26 13:55:52.956956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63269 ] 00:13:13.524 [2024-04-26 13:55:53.126821] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:13.524 [2024-04-26 13:55:53.126905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.783 [2024-04-26 13:55:53.358974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.746 13:55:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:14.746 13:55:54 -- common/autotest_common.sh@850 -- # return 0 00:13:14.746 13:55:54 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:14.746 13:55:54 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63309 00:13:14.746 13:55:54 -- event/cpu_locks.sh@103 -- # waitforlisten 63309 /var/tmp/spdk2.sock 00:13:14.746 13:55:54 -- common/autotest_common.sh@817 -- # '[' -z 63309 ']' 00:13:14.746 13:55:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:14.746 13:55:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:14.746 13:55:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:14.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:14.746 13:55:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:14.746 13:55:54 -- common/autotest_common.sh@10 -- # set +x 00:13:14.746 [2024-04-26 13:55:54.395013] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:14.746 [2024-04-26 13:55:54.395129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:13:15.005 [2024-04-26 13:55:54.563684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.572 [2024-04-26 13:55:55.024579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.476 13:55:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:17.476 13:55:56 -- common/autotest_common.sh@850 -- # return 0 00:13:17.476 13:55:56 -- event/cpu_locks.sh@105 -- # locks_exist 63309 00:13:17.476 13:55:56 -- event/cpu_locks.sh@22 -- # lslocks -p 63309 00:13:17.476 13:55:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:18.417 13:55:57 -- event/cpu_locks.sh@107 -- # killprocess 63269 00:13:18.417 13:55:57 -- common/autotest_common.sh@936 -- # '[' -z 63269 ']' 00:13:18.417 13:55:57 -- common/autotest_common.sh@940 -- # kill -0 63269 00:13:18.417 13:55:57 -- common/autotest_common.sh@941 -- # uname 00:13:18.417 13:55:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:18.417 13:55:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63269 00:13:18.417 13:55:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:18.417 13:55:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:18.417 killing process with pid 63269 00:13:18.417 13:55:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63269' 00:13:18.417 13:55:58 -- common/autotest_common.sh@955 -- # kill 63269 00:13:18.417 13:55:58 -- common/autotest_common.sh@960 -- # wait 63269 00:13:23.718 13:56:02 -- event/cpu_locks.sh@108 -- # killprocess 63309 00:13:23.718 13:56:02 -- common/autotest_common.sh@936 -- # '[' -z 63309 ']' 00:13:23.718 13:56:02 -- common/autotest_common.sh@940 -- # kill -0 63309 00:13:23.718 13:56:02 -- common/autotest_common.sh@941 -- # uname 00:13:23.718 13:56:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:23.718 13:56:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63309 00:13:23.718 13:56:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:23.718 13:56:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:23.718 killing process with pid 63309 00:13:23.718 13:56:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63309' 00:13:23.718 13:56:02 -- common/autotest_common.sh@955 -- # kill 63309 00:13:23.718 13:56:02 -- common/autotest_common.sh@960 -- # wait 63309 00:13:25.642 ************************************ 00:13:25.642 END TEST locking_app_on_unlocked_coremask 00:13:25.642 ************************************ 00:13:25.642 00:13:25.642 real 0m12.362s 00:13:25.642 user 0m12.516s 00:13:25.642 sys 0m1.440s 00:13:25.642 13:56:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:25.642 13:56:05 -- common/autotest_common.sh@10 -- # set +x 00:13:25.642 13:56:05 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:13:25.642 13:56:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:25.642 13:56:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:25.642 13:56:05 -- common/autotest_common.sh@10 -- # set +x 00:13:25.900 ************************************ 00:13:25.900 START TEST locking_app_on_locked_coremask 00:13:25.900 
************************************ 00:13:25.900 13:56:05 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:13:25.900 13:56:05 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63473 00:13:25.900 13:56:05 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:25.900 13:56:05 -- event/cpu_locks.sh@116 -- # waitforlisten 63473 /var/tmp/spdk.sock 00:13:25.900 13:56:05 -- common/autotest_common.sh@817 -- # '[' -z 63473 ']' 00:13:25.900 13:56:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.900 13:56:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:25.900 13:56:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.900 13:56:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:25.900 13:56:05 -- common/autotest_common.sh@10 -- # set +x 00:13:25.900 [2024-04-26 13:56:05.465035] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:25.900 [2024-04-26 13:56:05.465143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63473 ] 00:13:26.159 [2024-04-26 13:56:05.636263] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.416 [2024-04-26 13:56:05.868739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.354 13:56:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.354 13:56:06 -- common/autotest_common.sh@850 -- # return 0 00:13:27.354 13:56:06 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63512 00:13:27.354 13:56:06 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63512 /var/tmp/spdk2.sock 00:13:27.354 13:56:06 -- common/autotest_common.sh@638 -- # local es=0 00:13:27.354 13:56:06 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63512 /var/tmp/spdk2.sock 00:13:27.354 13:56:06 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:27.354 13:56:06 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:13:27.354 13:56:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.354 13:56:06 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:13:27.354 13:56:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:27.354 13:56:06 -- common/autotest_common.sh@641 -- # waitforlisten 63512 /var/tmp/spdk2.sock 00:13:27.354 13:56:06 -- common/autotest_common.sh@817 -- # '[' -z 63512 ']' 00:13:27.354 13:56:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:27.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:27.354 13:56:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.354 13:56:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:27.354 13:56:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.354 13:56:06 -- common/autotest_common.sh@10 -- # set +x 00:13:27.354 [2024-04-26 13:56:06.889987] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:27.354 [2024-04-26 13:56:06.890100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63512 ] 00:13:27.613 [2024-04-26 13:56:07.055041] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63473 has claimed it. 00:13:27.613 [2024-04-26 13:56:07.055112] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:27.873 ERROR: process (pid: 63512) is no longer running 00:13:27.873 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63512) - No such process 00:13:27.873 13:56:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:27.873 13:56:07 -- common/autotest_common.sh@850 -- # return 1 00:13:27.873 13:56:07 -- common/autotest_common.sh@641 -- # es=1 00:13:27.873 13:56:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:27.873 13:56:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:27.873 13:56:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:27.873 13:56:07 -- event/cpu_locks.sh@122 -- # locks_exist 63473 00:13:27.873 13:56:07 -- event/cpu_locks.sh@22 -- # lslocks -p 63473 00:13:27.873 13:56:07 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:28.441 13:56:07 -- event/cpu_locks.sh@124 -- # killprocess 63473 00:13:28.441 13:56:07 -- common/autotest_common.sh@936 -- # '[' -z 63473 ']' 00:13:28.441 13:56:07 -- common/autotest_common.sh@940 -- # kill -0 63473 00:13:28.441 13:56:07 -- common/autotest_common.sh@941 -- # uname 00:13:28.441 13:56:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:28.441 13:56:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63473 00:13:28.441 13:56:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:28.441 13:56:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:28.441 killing process with pid 63473 00:13:28.441 13:56:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63473' 00:13:28.441 13:56:07 -- common/autotest_common.sh@955 -- # kill 63473 00:13:28.441 13:56:07 -- common/autotest_common.sh@960 -- # wait 63473 00:13:30.983 00:13:30.983 real 0m4.975s 00:13:30.983 user 0m5.080s 00:13:30.983 sys 0m0.801s 00:13:30.983 13:56:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:30.983 ************************************ 00:13:30.983 END TEST locking_app_on_locked_coremask 00:13:30.983 ************************************ 00:13:30.983 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:30.983 13:56:10 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:30.983 13:56:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:30.983 13:56:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.983 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:30.983 ************************************ 00:13:30.983 START TEST locking_overlapped_coremask 00:13:30.983 ************************************ 00:13:30.983 13:56:10 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:13:30.983 13:56:10 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63586 00:13:30.983 13:56:10 -- event/cpu_locks.sh@133 -- # waitforlisten 63586 /var/tmp/spdk.sock 00:13:30.983 13:56:10 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:30.983 
13:56:10 -- common/autotest_common.sh@817 -- # '[' -z 63586 ']' 00:13:30.983 13:56:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.983 13:56:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:30.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.983 13:56:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.983 13:56:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:30.983 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:13:30.983 [2024-04-26 13:56:10.585610] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:30.983 [2024-04-26 13:56:10.585723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:13:31.243 [2024-04-26 13:56:10.750546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.502 [2024-04-26 13:56:10.985812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.502 [2024-04-26 13:56:10.985956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.502 [2024-04-26 13:56:10.985988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.439 13:56:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:32.439 13:56:11 -- common/autotest_common.sh@850 -- # return 0 00:13:32.439 13:56:11 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63623 00:13:32.439 13:56:11 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:32.439 13:56:11 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63623 /var/tmp/spdk2.sock 00:13:32.439 13:56:11 -- common/autotest_common.sh@638 -- # local es=0 00:13:32.439 13:56:11 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63623 /var/tmp/spdk2.sock 00:13:32.439 13:56:11 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:13:32.439 13:56:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:32.439 13:56:11 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:13:32.439 13:56:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:32.440 13:56:11 -- common/autotest_common.sh@641 -- # waitforlisten 63623 /var/tmp/spdk2.sock 00:13:32.440 13:56:11 -- common/autotest_common.sh@817 -- # '[' -z 63623 ']' 00:13:32.440 13:56:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:32.440 13:56:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:32.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:32.440 13:56:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:32.440 13:56:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:32.440 13:56:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.440 [2024-04-26 13:56:12.052970] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:32.440 [2024-04-26 13:56:12.053074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63623 ] 00:13:32.699 [2024-04-26 13:56:12.220991] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63586 has claimed it. 00:13:32.699 [2024-04-26 13:56:12.221058] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:33.267 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63623) - No such process 00:13:33.267 ERROR: process (pid: 63623) is no longer running 00:13:33.267 13:56:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:33.267 13:56:12 -- common/autotest_common.sh@850 -- # return 1 00:13:33.267 13:56:12 -- common/autotest_common.sh@641 -- # es=1 00:13:33.267 13:56:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:33.267 13:56:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:33.267 13:56:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:33.267 13:56:12 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:33.267 13:56:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:33.267 13:56:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:33.267 13:56:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:33.267 13:56:12 -- event/cpu_locks.sh@141 -- # killprocess 63586 00:13:33.267 13:56:12 -- common/autotest_common.sh@936 -- # '[' -z 63586 ']' 00:13:33.267 13:56:12 -- common/autotest_common.sh@940 -- # kill -0 63586 00:13:33.267 13:56:12 -- common/autotest_common.sh@941 -- # uname 00:13:33.267 13:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:33.267 13:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63586 00:13:33.267 13:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:33.267 13:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:33.267 killing process with pid 63586 00:13:33.267 13:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63586' 00:13:33.267 13:56:12 -- common/autotest_common.sh@955 -- # kill 63586 00:13:33.267 13:56:12 -- common/autotest_common.sh@960 -- # wait 63586 00:13:35.872 00:13:35.872 real 0m4.624s 00:13:35.872 user 0m11.963s 00:13:35.872 sys 0m0.624s 00:13:35.872 13:56:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:35.872 13:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:35.872 ************************************ 00:13:35.872 END TEST locking_overlapped_coremask 00:13:35.872 ************************************ 00:13:35.872 13:56:15 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:35.872 13:56:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:35.872 13:56:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.872 13:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:35.872 ************************************ 00:13:35.872 START TEST locking_overlapped_coremask_via_rpc 00:13:35.872 
************************************ 00:13:35.872 13:56:15 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:13:35.872 13:56:15 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63702 00:13:35.872 13:56:15 -- event/cpu_locks.sh@149 -- # waitforlisten 63702 /var/tmp/spdk.sock 00:13:35.872 13:56:15 -- common/autotest_common.sh@817 -- # '[' -z 63702 ']' 00:13:35.872 13:56:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.872 13:56:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:35.872 13:56:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.872 13:56:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:35.872 13:56:15 -- common/autotest_common.sh@10 -- # set +x 00:13:35.872 13:56:15 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:35.872 [2024-04-26 13:56:15.363420] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:35.872 [2024-04-26 13:56:15.363535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63702 ] 00:13:35.872 [2024-04-26 13:56:15.534374] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:35.872 [2024-04-26 13:56:15.534435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:36.131 [2024-04-26 13:56:15.770308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.131 [2024-04-26 13:56:15.771582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.131 [2024-04-26 13:56:15.771613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.508 13:56:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:37.508 13:56:16 -- common/autotest_common.sh@850 -- # return 0 00:13:37.508 13:56:16 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:37.508 13:56:16 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63733 00:13:37.508 13:56:16 -- event/cpu_locks.sh@153 -- # waitforlisten 63733 /var/tmp/spdk2.sock 00:13:37.508 13:56:16 -- common/autotest_common.sh@817 -- # '[' -z 63733 ']' 00:13:37.508 13:56:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:37.508 13:56:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:37.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:37.508 13:56:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:37.508 13:56:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:37.508 13:56:16 -- common/autotest_common.sh@10 -- # set +x 00:13:37.508 [2024-04-26 13:56:16.835571] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:37.508 [2024-04-26 13:56:16.835685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63733 ] 00:13:37.508 [2024-04-26 13:56:17.001814] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:13:37.508 [2024-04-26 13:56:17.001872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.077 [2024-04-26 13:56:17.472264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.077 [2024-04-26 13:56:17.472397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.077 [2024-04-26 13:56:17.472430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:39.984 13:56:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.984 13:56:19 -- common/autotest_common.sh@850 -- # return 0 00:13:39.984 13:56:19 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:39.984 13:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.984 13:56:19 -- common/autotest_common.sh@10 -- # set +x 00:13:39.984 13:56:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.984 13:56:19 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:39.984 13:56:19 -- common/autotest_common.sh@638 -- # local es=0 00:13:39.984 13:56:19 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:39.984 13:56:19 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:13:39.984 13:56:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.984 13:56:19 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:13:39.984 13:56:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:39.984 13:56:19 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:39.984 13:56:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.984 13:56:19 -- common/autotest_common.sh@10 -- # set +x 00:13:39.984 [2024-04-26 13:56:19.372356] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63702 has claimed it. 
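The claim failures in this test come from a simple overlap: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so core 2 is contested whichever side tries to lock it second. A quick way to see the contested core from the masks themselves; this is plain shell arithmetic, nothing SPDK-specific:

    a=0x7    # first target:  cores 0,1,2
    b=0x1c   # second target: cores 2,3,4

    overlap=$(( a & b ))
    printf 'overlap mask: 0x%x\n' "$overlap"   # 0x4 -> core 2

    # List the contested core indices.
    for core in $(seq 0 63); do
        (( (overlap >> core) & 1 )) && echo "core $core is claimed by both masks"
    done

That core 2 is exactly the one named in the "Failed to claim CPU core: 2" error reported next.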
00:13:39.984 2024/04/26 13:56:19 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:13:39.984 request: 00:13:39.984 { 00:13:39.984 "method": "framework_enable_cpumask_locks", 00:13:39.984 "params": {} 00:13:39.984 } 00:13:39.984 Got JSON-RPC error response 00:13:39.984 GoRPCClient: error on JSON-RPC call 00:13:39.984 13:56:19 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:13:39.984 13:56:19 -- common/autotest_common.sh@641 -- # es=1 00:13:39.984 13:56:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:39.984 13:56:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:39.984 13:56:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:39.984 13:56:19 -- event/cpu_locks.sh@158 -- # waitforlisten 63702 /var/tmp/spdk.sock 00:13:39.984 13:56:19 -- common/autotest_common.sh@817 -- # '[' -z 63702 ']' 00:13:39.984 13:56:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.984 13:56:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:39.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.984 13:56:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.984 13:56:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:39.984 13:56:19 -- common/autotest_common.sh@10 -- # set +x 00:13:39.984 13:56:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.984 13:56:19 -- common/autotest_common.sh@850 -- # return 0 00:13:39.984 13:56:19 -- event/cpu_locks.sh@159 -- # waitforlisten 63733 /var/tmp/spdk2.sock 00:13:39.984 13:56:19 -- common/autotest_common.sh@817 -- # '[' -z 63733 ']' 00:13:39.984 13:56:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:39.984 13:56:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:39.984 13:56:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:39.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
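Every "Waiting for process to start up and listen on UNIX domain socket ..." message in this run comes from the same readiness helper in autotest_common.sh: poll the RPC socket until the target answers, give up after a bounded number of retries, and bail early if the process died. A simplified stand-in that captures the idea; the socket path and the max_retries=100 bound mirror the log, while probing with rpc_get_methods and the 0.5 s sleep are assumptions rather than the helper's exact implementation:

    wait_for_rpc() {
        local sock=$1 pid=$2 max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 0; i < max_retries; i++ )); do
            # Stop early if the process exited instead of coming up.
            kill -0 "$pid" 2>/dev/null || return 1
            # Any successful RPC means the target is listening on the socket.
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
                >/dev/null 2>&1 && return 0
            sleep 0.5
        done
        return 1
    }

    # wait_for_rpc /var/tmp/spdk2.sock "$pid2"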
00:13:39.984 13:56:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:39.984 13:56:19 -- common/autotest_common.sh@10 -- # set +x 00:13:40.244 13:56:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:40.244 13:56:19 -- common/autotest_common.sh@850 -- # return 0 00:13:40.244 13:56:19 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:40.244 13:56:19 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:40.244 13:56:19 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:40.244 13:56:19 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:40.244 00:13:40.244 real 0m4.509s 00:13:40.244 user 0m1.003s 00:13:40.244 sys 0m0.235s 00:13:40.244 13:56:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:40.244 13:56:19 -- common/autotest_common.sh@10 -- # set +x 00:13:40.244 ************************************ 00:13:40.244 END TEST locking_overlapped_coremask_via_rpc 00:13:40.244 ************************************ 00:13:40.244 13:56:19 -- event/cpu_locks.sh@174 -- # cleanup 00:13:40.244 13:56:19 -- event/cpu_locks.sh@15 -- # [[ -z 63702 ]] 00:13:40.244 13:56:19 -- event/cpu_locks.sh@15 -- # killprocess 63702 00:13:40.244 13:56:19 -- common/autotest_common.sh@936 -- # '[' -z 63702 ']' 00:13:40.244 13:56:19 -- common/autotest_common.sh@940 -- # kill -0 63702 00:13:40.244 13:56:19 -- common/autotest_common.sh@941 -- # uname 00:13:40.244 13:56:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:40.244 13:56:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63702 00:13:40.244 13:56:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:40.244 13:56:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:40.244 killing process with pid 63702 00:13:40.244 13:56:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63702' 00:13:40.244 13:56:19 -- common/autotest_common.sh@955 -- # kill 63702 00:13:40.244 13:56:19 -- common/autotest_common.sh@960 -- # wait 63702 00:13:42.779 13:56:22 -- event/cpu_locks.sh@16 -- # [[ -z 63733 ]] 00:13:42.779 13:56:22 -- event/cpu_locks.sh@16 -- # killprocess 63733 00:13:42.779 13:56:22 -- common/autotest_common.sh@936 -- # '[' -z 63733 ']' 00:13:42.779 13:56:22 -- common/autotest_common.sh@940 -- # kill -0 63733 00:13:42.779 13:56:22 -- common/autotest_common.sh@941 -- # uname 00:13:42.779 13:56:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:42.779 13:56:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63733 00:13:42.779 13:56:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:13:42.779 13:56:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:13:42.779 killing process with pid 63733 00:13:42.779 13:56:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63733' 00:13:42.779 13:56:22 -- common/autotest_common.sh@955 -- # kill 63733 00:13:42.779 13:56:22 -- common/autotest_common.sh@960 -- # wait 63733 00:13:45.311 13:56:24 -- event/cpu_locks.sh@18 -- # rm -f 00:13:45.311 13:56:24 -- event/cpu_locks.sh@1 -- # cleanup 00:13:45.311 13:56:24 -- event/cpu_locks.sh@15 -- # [[ -z 63702 ]] 00:13:45.311 13:56:24 -- event/cpu_locks.sh@15 -- # killprocess 63702 00:13:45.311 13:56:24 -- 
common/autotest_common.sh@936 -- # '[' -z 63702 ']' 00:13:45.311 13:56:24 -- common/autotest_common.sh@940 -- # kill -0 63702 00:13:45.311 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63702) - No such process 00:13:45.311 Process with pid 63702 is not found 00:13:45.311 13:56:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63702 is not found' 00:13:45.311 13:56:24 -- event/cpu_locks.sh@16 -- # [[ -z 63733 ]] 00:13:45.311 13:56:24 -- event/cpu_locks.sh@16 -- # killprocess 63733 00:13:45.311 13:56:24 -- common/autotest_common.sh@936 -- # '[' -z 63733 ']' 00:13:45.311 13:56:24 -- common/autotest_common.sh@940 -- # kill -0 63733 00:13:45.311 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63733) - No such process 00:13:45.311 Process with pid 63733 is not found 00:13:45.311 13:56:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63733 is not found' 00:13:45.311 13:56:24 -- event/cpu_locks.sh@18 -- # rm -f 00:13:45.311 ************************************ 00:13:45.311 END TEST cpu_locks 00:13:45.311 00:13:45.311 real 0m54.314s 00:13:45.311 user 1m27.432s 00:13:45.311 sys 0m7.349s 00:13:45.311 13:56:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.311 13:56:24 -- common/autotest_common.sh@10 -- # set +x 00:13:45.311 ************************************ 00:13:45.311 00:13:45.311 real 1m26.197s 00:13:45.311 user 2m25.637s 00:13:45.311 sys 0m12.087s 00:13:45.311 13:56:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:45.311 13:56:24 -- common/autotest_common.sh@10 -- # set +x 00:13:45.311 ************************************ 00:13:45.311 END TEST event 00:13:45.311 ************************************ 00:13:45.311 13:56:24 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:45.311 13:56:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:45.311 13:56:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.311 13:56:24 -- common/autotest_common.sh@10 -- # set +x 00:13:45.311 ************************************ 00:13:45.311 START TEST thread 00:13:45.311 ************************************ 00:13:45.311 13:56:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:45.568 * Looking for test storage... 00:13:45.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:45.568 13:56:25 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:45.568 13:56:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:45.568 13:56:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:45.568 13:56:25 -- common/autotest_common.sh@10 -- # set +x 00:13:45.568 ************************************ 00:13:45.568 START TEST thread_poller_perf 00:13:45.568 ************************************ 00:13:45.568 13:56:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:45.568 [2024-04-26 13:56:25.213285] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:45.568 [2024-04-26 13:56:25.213394] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63958 ] 00:13:45.826 [2024-04-26 13:56:25.381862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.084 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:46.084 [2024-04-26 13:56:25.614619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.460 ====================================== 00:13:47.460 busy:2500130562 (cyc) 00:13:47.460 total_run_count: 387000 00:13:47.460 tsc_hz: 2490000000 (cyc) 00:13:47.460 ====================================== 00:13:47.460 poller_cost: 6460 (cyc), 2594 (nsec) 00:13:47.460 00:13:47.460 real 0m1.879s 00:13:47.460 user 0m1.652s 00:13:47.460 sys 0m0.119s 00:13:47.460 13:56:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:47.460 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:13:47.460 ************************************ 00:13:47.460 END TEST thread_poller_perf 00:13:47.460 ************************************ 00:13:47.460 13:56:27 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:47.460 13:56:27 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:13:47.460 13:56:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:47.460 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:13:47.719 ************************************ 00:13:47.719 START TEST thread_poller_perf 00:13:47.719 ************************************ 00:13:47.719 13:56:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:47.719 [2024-04-26 13:56:27.246581] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:47.719 [2024-04-26 13:56:27.246694] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64004 ] 00:13:47.978 [2024-04-26 13:56:27.414107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.978 Running 1000 pollers for 1 seconds with 0 microseconds period. 
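For reference, the first poller_perf summary above is just the busy TSC cycles divided by the number of poller runs, converted to nanoseconds with the printed TSC rate. Recomputing it from the values in the log (a sketch, not the tool's own code):
# Recompute poller_cost for the first run from the summary printed above.
busy=2500130562       # busy (cyc)
runs=387000           # total_run_count
tsc_hz=2490000000     # tsc_hz (cyc)
cost_cyc=$(( busy / runs ))                       # -> 6460 cyc per poller run
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # -> 2594 nsec at 2.49 GHz
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"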
00:13:47.978 [2024-04-26 13:56:27.644567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.882 ====================================== 00:13:49.882 busy:2493643012 (cyc) 00:13:49.882 total_run_count: 5068000 00:13:49.882 tsc_hz: 2490000000 (cyc) 00:13:49.882 ====================================== 00:13:49.882 poller_cost: 492 (cyc), 197 (nsec) 00:13:49.882 00:13:49.882 real 0m1.844s 00:13:49.882 user 0m1.638s 00:13:49.882 sys 0m0.098s 00:13:49.882 13:56:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.882 13:56:29 -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 ************************************ 00:13:49.882 END TEST thread_poller_perf 00:13:49.882 ************************************ 00:13:49.882 13:56:29 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:49.882 00:13:49.882 real 0m4.167s 00:13:49.882 user 0m3.454s 00:13:49.882 sys 0m0.466s 00:13:49.882 13:56:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:49.882 13:56:29 -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 ************************************ 00:13:49.882 END TEST thread 00:13:49.882 ************************************ 00:13:49.882 13:56:29 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:49.882 13:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:49.882 13:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:49.882 13:56:29 -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 ************************************ 00:13:49.882 START TEST accel 00:13:49.882 ************************************ 00:13:49.882 13:56:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:49.882 * Looking for test storage... 00:13:49.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:49.882 13:56:29 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:13:49.882 13:56:29 -- accel/accel.sh@82 -- # get_expected_opcs 00:13:49.882 13:56:29 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:49.882 13:56:29 -- accel/accel.sh@62 -- # spdk_tgt_pid=64090 00:13:49.882 13:56:29 -- accel/accel.sh@63 -- # waitforlisten 64090 00:13:49.882 13:56:29 -- common/autotest_common.sh@817 -- # '[' -z 64090 ']' 00:13:49.882 13:56:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.882 13:56:29 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:49.882 13:56:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:49.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.882 13:56:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.882 13:56:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:49.882 13:56:29 -- common/autotest_common.sh@10 -- # set +x 00:13:49.882 13:56:29 -- accel/accel.sh@61 -- # build_accel_config 00:13:49.882 13:56:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:49.882 13:56:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:49.882 13:56:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:49.882 13:56:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:49.882 13:56:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:49.882 13:56:29 -- accel/accel.sh@40 -- # local IFS=, 00:13:49.882 13:56:29 -- accel/accel.sh@41 -- # jq -r . 
00:13:49.882 [2024-04-26 13:56:29.504890] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:49.882 [2024-04-26 13:56:29.505011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64090 ] 00:13:50.141 [2024-04-26 13:56:29.676600] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.399 [2024-04-26 13:56:29.905948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.333 13:56:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:51.333 13:56:30 -- common/autotest_common.sh@850 -- # return 0 00:13:51.333 13:56:30 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:51.333 13:56:30 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:51.333 13:56:30 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:13:51.333 13:56:30 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:51.333 13:56:30 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:51.333 13:56:30 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:51.333 13:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:51.333 13:56:30 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:13:51.333 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:13:51.333 13:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # IFS== 00:13:51.333 13:56:30 -- accel/accel.sh@72 -- # read -r opc module 00:13:51.333 13:56:30 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:51.333 13:56:30 -- accel/accel.sh@75 -- # killprocess 64090 00:13:51.333 13:56:30 -- common/autotest_common.sh@936 -- # '[' -z 64090 ']' 00:13:51.334 13:56:30 -- common/autotest_common.sh@940 -- # kill -0 64090 00:13:51.334 13:56:30 -- common/autotest_common.sh@941 -- # uname 00:13:51.334 13:56:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:51.334 13:56:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64090 00:13:51.334 13:56:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:51.334 13:56:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:51.334 killing process with pid 64090 00:13:51.334 13:56:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64090' 00:13:51.334 13:56:30 -- common/autotest_common.sh@955 -- # kill 64090 00:13:51.334 13:56:30 -- common/autotest_common.sh@960 -- # wait 64090 00:13:53.899 13:56:33 -- accel/accel.sh@76 -- # trap - ERR 00:13:53.899 13:56:33 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:53.899 13:56:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:53.899 13:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.899 13:56:33 -- common/autotest_common.sh@10 -- # set +x 00:13:53.899 13:56:33 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:13:53.899 13:56:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:53.899 13:56:33 -- accel/accel.sh@12 -- # build_accel_config 00:13:53.899 13:56:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:53.899 13:56:33 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:53.899 13:56:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:53.899 13:56:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:53.899 13:56:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:53.899 13:56:33 -- accel/accel.sh@40 -- # local IFS=, 00:13:53.899 13:56:33 -- accel/accel.sh@41 -- # jq -r . 00:13:53.899 13:56:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:53.899 13:56:33 -- common/autotest_common.sh@10 -- # set +x 00:13:54.158 13:56:33 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:54.158 13:56:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:54.158 13:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.158 13:56:33 -- common/autotest_common.sh@10 -- # set +x 00:13:54.158 ************************************ 00:13:54.158 START TEST accel_missing_filename 00:13:54.158 ************************************ 00:13:54.158 13:56:33 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:13:54.158 13:56:33 -- common/autotest_common.sh@638 -- # local es=0 00:13:54.158 13:56:33 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:54.158 13:56:33 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:13:54.158 13:56:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:54.158 13:56:33 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:13:54.158 13:56:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:54.158 13:56:33 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:13:54.158 13:56:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:54.158 13:56:33 -- accel/accel.sh@12 -- # build_accel_config 00:13:54.158 13:56:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:54.158 13:56:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:54.158 13:56:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:54.158 13:56:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:54.158 13:56:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:54.158 13:56:33 -- accel/accel.sh@40 -- # local IFS=, 00:13:54.158 13:56:33 -- accel/accel.sh@41 -- # jq -r . 00:13:54.158 [2024-04-26 13:56:33.748965] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:13:54.158 [2024-04-26 13:56:33.749072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64197 ] 00:13:54.417 [2024-04-26 13:56:33.918651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.676 [2024-04-26 13:56:34.153303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.935 [2024-04-26 13:56:34.407240] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:55.504 [2024-04-26 13:56:34.937779] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:13:55.764 A filename is required. 
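"A filename is required." is exactly the failure this NOT test is after: compress/decompress workloads need an input file passed with -l, which accel_missing_filename deliberately omits. Standalone, the failing invocation from the trace above reduces roughly to the following (the -c /dev/fd/62 config plumbing is dropped here for brevity):
# Expected to exit non-zero and print "A filename is required."
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
    || echo "failed as expected"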
00:13:55.764 13:56:35 -- common/autotest_common.sh@641 -- # es=234 00:13:55.764 13:56:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:55.764 13:56:35 -- common/autotest_common.sh@650 -- # es=106 00:13:55.764 13:56:35 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:55.764 13:56:35 -- common/autotest_common.sh@658 -- # es=1 00:13:55.764 13:56:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:55.764 00:13:55.764 real 0m1.663s 00:13:55.764 user 0m1.411s 00:13:55.764 sys 0m0.183s 00:13:55.764 13:56:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:55.764 13:56:35 -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 ************************************ 00:13:55.764 END TEST accel_missing_filename 00:13:55.764 ************************************ 00:13:55.764 13:56:35 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:55.764 13:56:35 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:13:55.764 13:56:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:55.764 13:56:35 -- common/autotest_common.sh@10 -- # set +x 00:13:56.023 ************************************ 00:13:56.023 START TEST accel_compress_verify 00:13:56.023 ************************************ 00:13:56.023 13:56:35 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:56.023 13:56:35 -- common/autotest_common.sh@638 -- # local es=0 00:13:56.023 13:56:35 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:56.023 13:56:35 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:13:56.023 13:56:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:56.023 13:56:35 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:13:56.023 13:56:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:56.023 13:56:35 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:56.023 13:56:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:56.023 13:56:35 -- accel/accel.sh@12 -- # build_accel_config 00:13:56.023 13:56:35 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:56.023 13:56:35 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:56.023 13:56:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:56.023 13:56:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:56.023 13:56:35 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:56.023 13:56:35 -- accel/accel.sh@40 -- # local IFS=, 00:13:56.023 13:56:35 -- accel/accel.sh@41 -- # jq -r . 00:13:56.023 [2024-04-26 13:56:35.559526] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:13:56.024 [2024-04-26 13:56:35.559628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64239 ] 00:13:56.284 [2024-04-26 13:56:35.728726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.284 [2024-04-26 13:56:35.955079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.544 [2024-04-26 13:56:36.209298] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:57.115 [2024-04-26 13:56:36.752501] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:13:57.684 00:13:57.684 Compression does not support the verify option, aborting. 00:13:57.684 13:56:37 -- common/autotest_common.sh@641 -- # es=161 00:13:57.684 13:56:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.684 13:56:37 -- common/autotest_common.sh@650 -- # es=33 00:13:57.684 13:56:37 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:57.684 13:56:37 -- common/autotest_common.sh@658 -- # es=1 00:13:57.684 13:56:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.684 00:13:57.684 real 0m1.676s 00:13:57.684 user 0m1.431s 00:13:57.684 sys 0m0.181s 00:13:57.684 13:56:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.684 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.684 ************************************ 00:13:57.684 END TEST accel_compress_verify 00:13:57.685 ************************************ 00:13:57.685 13:56:37 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:57.685 13:56:37 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:57.685 13:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.685 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.685 ************************************ 00:13:57.685 START TEST accel_wrong_workload 00:13:57.685 ************************************ 00:13:57.685 13:56:37 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:13:57.685 13:56:37 -- common/autotest_common.sh@638 -- # local es=0 00:13:57.685 13:56:37 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:57.685 13:56:37 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:13:57.685 13:56:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.685 13:56:37 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:13:57.685 13:56:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.685 13:56:37 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:13:57.685 13:56:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:57.685 13:56:37 -- accel/accel.sh@12 -- # build_accel_config 00:13:57.685 13:56:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:57.685 13:56:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:57.685 13:56:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:57.685 13:56:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:57.685 13:56:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:57.685 13:56:37 -- accel/accel.sh@40 -- # local IFS=, 00:13:57.685 13:56:37 -- accel/accel.sh@41 -- # jq -r . 
00:13:57.944 Unsupported workload type: foobar 00:13:57.944 [2024-04-26 13:56:37.366402] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:57.944 accel_perf options: 00:13:57.944 [-h help message] 00:13:57.944 [-q queue depth per core] 00:13:57.944 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:57.944 [-T number of threads per core 00:13:57.944 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:57.944 [-t time in seconds] 00:13:57.945 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:57.945 [ dif_verify, , dif_generate, dif_generate_copy 00:13:57.945 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:57.945 [-l for compress/decompress workloads, name of uncompressed input file 00:13:57.945 [-S for crc32c workload, use this seed value (default 0) 00:13:57.945 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:57.945 [-f for fill workload, use this BYTE value (default 255) 00:13:57.945 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:57.945 [-y verify result if this switch is on] 00:13:57.945 [-a tasks to allocate per core (default: same value as -q)] 00:13:57.945 Can be used to spread operations across a wider range of memory. 00:13:57.945 13:56:37 -- common/autotest_common.sh@641 -- # es=1 00:13:57.945 13:56:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.945 13:56:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:57.945 13:56:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.945 00:13:57.945 real 0m0.076s 00:13:57.945 user 0m0.072s 00:13:57.945 sys 0m0.045s 00:13:57.945 13:56:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.945 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.945 ************************************ 00:13:57.945 END TEST accel_wrong_workload 00:13:57.945 ************************************ 00:13:57.945 13:56:37 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:57.945 13:56:37 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:13:57.945 13:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.945 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:57.945 ************************************ 00:13:57.945 START TEST accel_negative_buffers 00:13:57.945 ************************************ 00:13:57.945 13:56:37 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:57.945 13:56:37 -- common/autotest_common.sh@638 -- # local es=0 00:13:57.945 13:56:37 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:57.945 13:56:37 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:13:57.945 13:56:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.945 13:56:37 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:13:57.945 13:56:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:57.945 13:56:37 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:13:57.945 13:56:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:57.945 13:56:37 -- accel/accel.sh@12 -- # 
build_accel_config 00:13:57.945 13:56:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:57.945 13:56:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:57.945 13:56:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:57.945 13:56:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:57.945 13:56:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:57.945 13:56:37 -- accel/accel.sh@40 -- # local IFS=, 00:13:57.945 13:56:37 -- accel/accel.sh@41 -- # jq -r . 00:13:57.945 -x option must be non-negative. 00:13:57.945 [2024-04-26 13:56:37.581876] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:57.945 accel_perf options: 00:13:57.945 [-h help message] 00:13:57.945 [-q queue depth per core] 00:13:57.945 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:57.945 [-T number of threads per core 00:13:57.945 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:57.945 [-t time in seconds] 00:13:57.945 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:57.945 [ dif_verify, , dif_generate, dif_generate_copy 00:13:57.945 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:57.945 [-l for compress/decompress workloads, name of uncompressed input file 00:13:57.945 [-S for crc32c workload, use this seed value (default 0) 00:13:57.945 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:57.945 [-f for fill workload, use this BYTE value (default 255) 00:13:57.945 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:57.945 [-y verify result if this switch is on] 00:13:57.945 [-a tasks to allocate per core (default: same value as -q)] 00:13:57.945 Can be used to spread operations across a wider range of memory. 
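Both option dumps above come from argument-parsing failures the tests provoke on purpose: accel_wrong_workload passes -w foobar and accel_negative_buffers passes -w xor -y -x -1, each wrapped in the NOT helper used throughout these scripts, which succeeds only when the wrapped command fails. Its effect is roughly the following sketch (not the exact implementation in autotest_common.sh):
# Invert the exit status so an expected failure counts as a passing test step.
NOT() { if "$@"; then return 1; else return 0; fi; }
NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar && echo "test step passed"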
00:13:57.945 13:56:37 -- common/autotest_common.sh@641 -- # es=1 00:13:57.945 13:56:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:57.945 ************************************ 00:13:57.945 END TEST accel_negative_buffers 00:13:57.945 ************************************ 00:13:57.945 13:56:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:13:57.945 13:56:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:57.945 00:13:57.945 real 0m0.087s 00:13:57.945 user 0m0.075s 00:13:57.945 sys 0m0.054s 00:13:57.945 13:56:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.945 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:58.205 13:56:37 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:58.205 13:56:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:13:58.205 13:56:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.205 13:56:37 -- common/autotest_common.sh@10 -- # set +x 00:13:58.205 ************************************ 00:13:58.205 START TEST accel_crc32c 00:13:58.205 ************************************ 00:13:58.205 13:56:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:58.205 13:56:37 -- accel/accel.sh@16 -- # local accel_opc 00:13:58.205 13:56:37 -- accel/accel.sh@17 -- # local accel_module 00:13:58.205 13:56:37 -- accel/accel.sh@19 -- # IFS=: 00:13:58.205 13:56:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:58.205 13:56:37 -- accel/accel.sh@19 -- # read -r var val 00:13:58.205 13:56:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:58.205 13:56:37 -- accel/accel.sh@12 -- # build_accel_config 00:13:58.205 13:56:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:58.205 13:56:37 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:58.205 13:56:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:58.205 13:56:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:58.205 13:56:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:58.205 13:56:37 -- accel/accel.sh@40 -- # local IFS=, 00:13:58.205 13:56:37 -- accel/accel.sh@41 -- # jq -r . 00:13:58.205 [2024-04-26 13:56:37.802656] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
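The shell trace that follows walks the harness through the settings accel_perf reports for this run (crc32c workload, 32 seed, 4096-byte buffers, software module, 1 second, verify on) before printing the summary. Stripped of the -c /dev/fd/62 config plumbing shown in the trace above, the underlying invocation is simply:
# Direct equivalent of the wrapped crc32c run (sketch): CRC-32C over 4 KiB buffers
# for 1 second with seed 32 and result verification (-y).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y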
00:13:58.205 [2024-04-26 13:56:37.802804] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64335 ] 00:13:58.511 [2024-04-26 13:56:37.980852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.787 [2024-04-26 13:56:38.213146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val=0x1 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val=crc32c 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val=32 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val=software 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@22 -- # accel_module=software 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val=32 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.787 13:56:38 -- accel/accel.sh@20 -- # val=32 00:13:58.787 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:58.787 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.788 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.788 13:56:38 -- accel/accel.sh@20 -- # val=1 00:13:58.788 13:56:38 
-- accel/accel.sh@21 -- # case "$var" in 00:13:58.788 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:58.788 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:58.788 13:56:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:59.047 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:59.047 13:56:38 -- accel/accel.sh@20 -- # val=Yes 00:13:59.047 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:59.047 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:59.047 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:13:59.047 13:56:38 -- accel/accel.sh@20 -- # val= 00:13:59.047 13:56:38 -- accel/accel.sh@21 -- # case "$var" in 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # IFS=: 00:13:59.047 13:56:38 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@20 -- # val= 00:14:00.952 13:56:40 -- accel/accel.sh@21 -- # case "$var" in 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@20 -- # val= 00:14:00.952 13:56:40 -- accel/accel.sh@21 -- # case "$var" in 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@20 -- # val= 00:14:00.952 13:56:40 -- accel/accel.sh@21 -- # case "$var" in 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@20 -- # val= 00:14:00.952 13:56:40 -- accel/accel.sh@21 -- # case "$var" in 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@20 -- # val= 00:14:00.952 13:56:40 -- accel/accel.sh@21 -- # case "$var" in 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@20 -- # val= 00:14:00.952 13:56:40 -- accel/accel.sh@21 -- # case "$var" in 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.952 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.952 13:56:40 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:00.952 13:56:40 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:00.952 13:56:40 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:00.952 00:14:00.952 real 0m2.654s 00:14:00.953 user 0m2.394s 00:14:00.953 sys 0m0.174s 00:14:00.953 13:56:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:00.953 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.953 ************************************ 00:14:00.953 END TEST accel_crc32c 00:14:00.953 ************************************ 00:14:00.953 13:56:40 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:14:00.953 13:56:40 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:00.953 13:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:00.953 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:00.953 ************************************ 00:14:00.953 START TEST accel_crc32c_C2 00:14:00.953 
************************************ 00:14:00.953 13:56:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:14:00.953 13:56:40 -- accel/accel.sh@16 -- # local accel_opc 00:14:00.953 13:56:40 -- accel/accel.sh@17 -- # local accel_module 00:14:00.953 13:56:40 -- accel/accel.sh@19 -- # IFS=: 00:14:00.953 13:56:40 -- accel/accel.sh@19 -- # read -r var val 00:14:00.953 13:56:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:14:00.953 13:56:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:14:00.953 13:56:40 -- accel/accel.sh@12 -- # build_accel_config 00:14:00.953 13:56:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:00.953 13:56:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:00.953 13:56:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:00.953 13:56:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:00.953 13:56:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:00.953 13:56:40 -- accel/accel.sh@40 -- # local IFS=, 00:14:00.953 13:56:40 -- accel/accel.sh@41 -- # jq -r . 00:14:00.953 [2024-04-26 13:56:40.607249] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:00.953 [2024-04-26 13:56:40.607360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64387 ] 00:14:01.212 [2024-04-26 13:56:40.777149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.472 [2024-04-26 13:56:41.012215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=0x1 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=crc32c 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=0 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" 
in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=software 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@22 -- # accel_module=software 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=32 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=32 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=1 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val=Yes 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:01.731 13:56:41 -- accel/accel.sh@20 -- # val= 00:14:01.731 13:56:41 -- accel/accel.sh@21 -- # case "$var" in 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # IFS=: 00:14:01.731 13:56:41 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@20 -- # val= 00:14:03.636 13:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@20 -- # val= 00:14:03.636 13:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@20 -- # val= 00:14:03.636 13:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@20 -- # val= 00:14:03.636 13:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@20 -- # val= 00:14:03.636 13:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@20 -- # val= 
00:14:03.636 13:56:43 -- accel/accel.sh@21 -- # case "$var" in 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.636 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.636 13:56:43 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:03.636 13:56:43 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:14:03.636 13:56:43 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:03.636 00:14:03.636 real 0m2.663s 00:14:03.636 user 0m2.393s 00:14:03.636 sys 0m0.175s 00:14:03.636 ************************************ 00:14:03.636 END TEST accel_crc32c_C2 00:14:03.636 ************************************ 00:14:03.636 13:56:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:03.636 13:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:03.636 13:56:43 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:14:03.636 13:56:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:03.636 13:56:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:03.636 13:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:03.895 ************************************ 00:14:03.895 START TEST accel_copy 00:14:03.895 ************************************ 00:14:03.895 13:56:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:14:03.895 13:56:43 -- accel/accel.sh@16 -- # local accel_opc 00:14:03.895 13:56:43 -- accel/accel.sh@17 -- # local accel_module 00:14:03.895 13:56:43 -- accel/accel.sh@19 -- # IFS=: 00:14:03.895 13:56:43 -- accel/accel.sh@19 -- # read -r var val 00:14:03.895 13:56:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:14:03.895 13:56:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:14:03.895 13:56:43 -- accel/accel.sh@12 -- # build_accel_config 00:14:03.895 13:56:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:03.895 13:56:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:03.895 13:56:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:03.895 13:56:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:03.895 13:56:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:03.895 13:56:43 -- accel/accel.sh@40 -- # local IFS=, 00:14:03.895 13:56:43 -- accel/accel.sh@41 -- # jq -r . 00:14:03.895 [2024-04-26 13:56:43.436373] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:03.895 [2024-04-26 13:56:43.436640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64437 ] 00:14:04.153 [2024-04-26 13:56:43.612644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.412 [2024-04-26 13:56:43.850459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.671 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=0x1 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=copy 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@23 -- # accel_opc=copy 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=software 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@22 -- # accel_module=software 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=32 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=32 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=1 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:04.672 
13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val=Yes 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:04.672 13:56:44 -- accel/accel.sh@20 -- # val= 00:14:04.672 13:56:44 -- accel/accel.sh@21 -- # case "$var" in 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # IFS=: 00:14:04.672 13:56:44 -- accel/accel.sh@19 -- # read -r var val 00:14:06.578 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:06.578 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:06.578 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.578 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.578 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:06.578 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:06.578 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.578 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.578 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:06.578 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:06.578 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.578 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.579 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:06.579 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.579 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:06.579 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.579 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:06.579 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.579 13:56:46 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:06.579 13:56:46 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:14:06.579 13:56:46 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:06.579 00:14:06.579 real 0m2.709s 00:14:06.579 user 0m2.417s 00:14:06.579 sys 0m0.197s 00:14:06.579 13:56:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:06.579 ************************************ 00:14:06.579 END TEST accel_copy 00:14:06.579 ************************************ 00:14:06.579 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.579 13:56:46 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:06.579 13:56:46 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:06.579 13:56:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:06.579 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:06.579 ************************************ 00:14:06.579 START TEST accel_fill 00:14:06.579 ************************************ 00:14:06.579 13:56:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:06.579 13:56:46 -- accel/accel.sh@16 -- # local accel_opc 00:14:06.579 13:56:46 -- accel/accel.sh@17 -- # local 
accel_module 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:06.579 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:06.579 13:56:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:06.579 13:56:46 -- accel/accel.sh@12 -- # build_accel_config 00:14:06.579 13:56:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:14:06.579 13:56:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:06.579 13:56:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:06.579 13:56:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:06.579 13:56:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:06.579 13:56:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:06.579 13:56:46 -- accel/accel.sh@40 -- # local IFS=, 00:14:06.579 13:56:46 -- accel/accel.sh@41 -- # jq -r . 00:14:06.838 [2024-04-26 13:56:46.297430] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:06.838 [2024-04-26 13:56:46.297536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64492 ] 00:14:06.838 [2024-04-26 13:56:46.455987] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.097 [2024-04-26 13:56:46.691116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=0x1 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=fill 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@23 -- # accel_opc=fill 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=0x80 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case 
"$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=software 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@22 -- # accel_module=software 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=64 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=64 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=1 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val=Yes 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:07.357 13:56:46 -- accel/accel.sh@20 -- # val= 00:14:07.357 13:56:46 -- accel/accel.sh@21 -- # case "$var" in 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # IFS=: 00:14:07.357 13:56:46 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@20 -- # val= 00:14:09.263 13:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # IFS=: 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@20 -- # val= 00:14:09.263 13:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # IFS=: 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@20 -- # val= 00:14:09.263 13:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # IFS=: 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@20 -- # val= 00:14:09.263 13:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # IFS=: 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@20 -- # val= 00:14:09.263 13:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # IFS=: 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@20 -- # val= 00:14:09.263 13:56:48 -- accel/accel.sh@21 -- # case "$var" in 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # IFS=: 00:14:09.263 13:56:48 -- accel/accel.sh@19 -- # read -r var val 00:14:09.263 13:56:48 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:14:09.263 13:56:48 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:14:09.263 13:56:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:09.263 00:14:09.263 real 0m2.659s 00:14:09.263 user 0m2.399s 00:14:09.263 sys 0m0.175s 00:14:09.263 13:56:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:09.263 ************************************ 00:14:09.263 END TEST accel_fill 00:14:09.263 ************************************ 00:14:09.263 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:14:09.522 13:56:48 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:14:09.522 13:56:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:09.522 13:56:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:09.522 13:56:48 -- common/autotest_common.sh@10 -- # set +x 00:14:09.522 ************************************ 00:14:09.522 START TEST accel_copy_crc32c 00:14:09.522 ************************************ 00:14:09.522 13:56:49 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:14:09.522 13:56:49 -- accel/accel.sh@16 -- # local accel_opc 00:14:09.522 13:56:49 -- accel/accel.sh@17 -- # local accel_module 00:14:09.522 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:09.522 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:09.522 13:56:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:14:09.522 13:56:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:14:09.522 13:56:49 -- accel/accel.sh@12 -- # build_accel_config 00:14:09.522 13:56:49 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:09.522 13:56:49 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:09.522 13:56:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:09.522 13:56:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:09.522 13:56:49 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:09.522 13:56:49 -- accel/accel.sh@40 -- # local IFS=, 00:14:09.522 13:56:49 -- accel/accel.sh@41 -- # jq -r . 00:14:09.522 [2024-04-26 13:56:49.114888] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:09.522 [2024-04-26 13:56:49.114998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64544 ] 00:14:09.780 [2024-04-26 13:56:49.283857] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.039 [2024-04-26 13:56:49.522649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=0x1 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=0 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=software 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@22 -- # accel_module=software 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=32 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=32 
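The repeated IFS=: / read -r var val / case "$var" trace surrounding each run is accel.sh stepping through accel_perf's "key: value" configuration printout one line at a time, keeping the operation name and the module that serviced it so the [[ -n ... ]] checks after the run can assert on them. A minimal sketch of that pattern (key strings here are illustrative, not the exact ones accel.sh matches):

    # Read "key: value" lines as printed by accel_perf and keep the two fields
    # the test asserts on afterwards; key names below are illustrative.
    accel_opc=""
    accel_module=""
    while IFS=: read -r var val; do
        case "$var" in
            *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;
            *"Module"*)        accel_module=${val//[[:space:]]/} ;;
        esac
    done < accel_perf.out

    # Post-run assertions, matching the checks visible in this log:
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]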
00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=1 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val=Yes 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:10.306 13:56:49 -- accel/accel.sh@20 -- # val= 00:14:10.306 13:56:49 -- accel/accel.sh@21 -- # case "$var" in 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # IFS=: 00:14:10.306 13:56:49 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@20 -- # val= 00:14:12.236 13:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@20 -- # val= 00:14:12.236 13:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@20 -- # val= 00:14:12.236 13:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@20 -- # val= 00:14:12.236 13:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@20 -- # val= 00:14:12.236 13:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@20 -- # val= 00:14:12.236 13:56:51 -- accel/accel.sh@21 -- # case "$var" in 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:12.236 13:56:51 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:12.236 13:56:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:12.236 00:14:12.236 real 0m2.677s 00:14:12.236 user 0m2.418s 00:14:12.236 sys 0m0.169s 00:14:12.236 13:56:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:12.236 ************************************ 00:14:12.236 END TEST accel_copy_crc32c 00:14:12.236 ************************************ 00:14:12.236 13:56:51 -- common/autotest_common.sh@10 -- # set +x 00:14:12.236 13:56:51 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:14:12.236 13:56:51 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:14:12.236 13:56:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:12.236 13:56:51 -- common/autotest_common.sh@10 -- # set +x 00:14:12.236 ************************************ 00:14:12.236 START TEST accel_copy_crc32c_C2 00:14:12.236 ************************************ 00:14:12.236 13:56:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:14:12.236 13:56:51 -- accel/accel.sh@16 -- # local accel_opc 00:14:12.236 13:56:51 -- accel/accel.sh@17 -- # local accel_module 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # IFS=: 00:14:12.236 13:56:51 -- accel/accel.sh@19 -- # read -r var val 00:14:12.236 13:56:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:14:12.236 13:56:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:14:12.236 13:56:51 -- accel/accel.sh@12 -- # build_accel_config 00:14:12.236 13:56:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:12.236 13:56:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:12.236 13:56:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:12.236 13:56:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:12.236 13:56:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:12.236 13:56:51 -- accel/accel.sh@40 -- # local IFS=, 00:14:12.236 13:56:51 -- accel/accel.sh@41 -- # jq -r . 00:14:12.495 [2024-04-26 13:56:51.937491] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:12.495 [2024-04-26 13:56:51.937794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64600 ] 00:14:12.495 [2024-04-26 13:56:52.108112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.754 [2024-04-26 13:56:52.336588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=0x1 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=copy_crc32c 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=0 00:14:13.014 13:56:52 -- 
accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val='8192 bytes' 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=software 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@22 -- # accel_module=software 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=32 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=32 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=1 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val=Yes 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:13.014 13:56:52 -- accel/accel.sh@20 -- # val= 00:14:13.014 13:56:52 -- accel/accel.sh@21 -- # case "$var" in 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # IFS=: 00:14:13.014 13:56:52 -- accel/accel.sh@19 -- # read -r var val 00:14:14.920 13:56:54 -- accel/accel.sh@20 -- # val= 00:14:14.920 13:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # read -r var val 00:14:14.920 13:56:54 -- accel/accel.sh@20 -- # val= 00:14:14.920 13:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # read -r var val 00:14:14.920 13:56:54 -- accel/accel.sh@20 -- # val= 00:14:14.920 13:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # read -r var val 
00:14:14.920 13:56:54 -- accel/accel.sh@20 -- # val= 00:14:14.920 13:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # read -r var val 00:14:14.920 13:56:54 -- accel/accel.sh@20 -- # val= 00:14:14.920 13:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # read -r var val 00:14:14.920 13:56:54 -- accel/accel.sh@20 -- # val= 00:14:14.920 13:56:54 -- accel/accel.sh@21 -- # case "$var" in 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:14.920 13:56:54 -- accel/accel.sh@19 -- # read -r var val 00:14:14.920 13:56:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:14.920 13:56:54 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:14:14.921 13:56:54 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:14.921 00:14:14.921 real 0m2.664s 00:14:14.921 user 0m2.389s 00:14:14.921 sys 0m0.187s 00:14:14.921 ************************************ 00:14:14.921 END TEST accel_copy_crc32c_C2 00:14:14.921 ************************************ 00:14:14.921 13:56:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:14.921 13:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:15.180 13:56:54 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:14:15.180 13:56:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:15.180 13:56:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:15.180 13:56:54 -- common/autotest_common.sh@10 -- # set +x 00:14:15.180 ************************************ 00:14:15.180 START TEST accel_dualcast 00:14:15.180 ************************************ 00:14:15.180 13:56:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:14:15.180 13:56:54 -- accel/accel.sh@16 -- # local accel_opc 00:14:15.180 13:56:54 -- accel/accel.sh@17 -- # local accel_module 00:14:15.180 13:56:54 -- accel/accel.sh@19 -- # IFS=: 00:14:15.180 13:56:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:14:15.180 13:56:54 -- accel/accel.sh@19 -- # read -r var val 00:14:15.180 13:56:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:14:15.180 13:56:54 -- accel/accel.sh@12 -- # build_accel_config 00:14:15.180 13:56:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:15.180 13:56:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:15.180 13:56:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:15.180 13:56:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:15.180 13:56:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:15.180 13:56:54 -- accel/accel.sh@40 -- # local IFS=, 00:14:15.180 13:56:54 -- accel/accel.sh@41 -- # jq -r . 00:14:15.180 [2024-04-26 13:56:54.757292] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:15.180 [2024-04-26 13:56:54.757547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64645 ] 00:14:15.439 [2024-04-26 13:56:54.927763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.699 [2024-04-26 13:56:55.161903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.959 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.959 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.959 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.959 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.959 13:56:55 -- accel/accel.sh@20 -- # val=0x1 00:14:15.959 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.959 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.959 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.959 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.959 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.959 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val=dualcast 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val=software 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@22 -- # accel_module=software 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val=32 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val=32 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val=1 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val='1 seconds' 
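Every case in this stage goes through the same wrapper: run_test prints the START TEST / END TEST banners, times the command (the real/user/sys figures above), and propagates its exit status. A simplified sketch of that wrapper, not the actual helper from test/common/autotest_common.sh:

    # Simplified run_test: banner, time the test command, banner, keep its status.
    run_test() {
        local name=$1
        shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Invoked the same way as in this log, for example:
    run_test accel_dualcast accel_test -t 1 -w dualcast -y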
00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val=Yes 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:15.960 13:56:55 -- accel/accel.sh@20 -- # val= 00:14:15.960 13:56:55 -- accel/accel.sh@21 -- # case "$var" in 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # IFS=: 00:14:15.960 13:56:55 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@20 -- # val= 00:14:17.881 13:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@20 -- # val= 00:14:17.881 13:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@20 -- # val= 00:14:17.881 13:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@20 -- # val= 00:14:17.881 13:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@20 -- # val= 00:14:17.881 13:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@20 -- # val= 00:14:17.881 13:56:57 -- accel/accel.sh@21 -- # case "$var" in 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:17.881 13:56:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:17.881 13:56:57 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:14:17.881 13:56:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:17.881 ************************************ 00:14:17.881 END TEST accel_dualcast 00:14:17.881 ************************************ 00:14:17.881 00:14:17.881 real 0m2.688s 00:14:17.881 user 0m2.404s 00:14:17.881 sys 0m0.192s 00:14:17.881 13:56:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.881 13:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:17.881 13:56:57 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:14:17.881 13:56:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:17.881 13:56:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.881 13:56:57 -- common/autotest_common.sh@10 -- # set +x 00:14:17.881 ************************************ 00:14:17.881 START TEST accel_compare 00:14:17.881 ************************************ 00:14:17.881 13:56:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:14:17.881 13:56:57 -- accel/accel.sh@16 -- # local accel_opc 00:14:17.881 13:56:57 -- accel/accel.sh@17 -- # local 
accel_module 00:14:17.881 13:56:57 -- accel/accel.sh@19 -- # IFS=: 00:14:18.140 13:56:57 -- accel/accel.sh@19 -- # read -r var val 00:14:18.140 13:56:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:14:18.140 13:56:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:14:18.140 13:56:57 -- accel/accel.sh@12 -- # build_accel_config 00:14:18.140 13:56:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:18.141 13:56:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:18.141 13:56:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:18.141 13:56:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:18.141 13:56:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:18.141 13:56:57 -- accel/accel.sh@40 -- # local IFS=, 00:14:18.141 13:56:57 -- accel/accel.sh@41 -- # jq -r . 00:14:18.141 [2024-04-26 13:56:57.607520] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:18.141 [2024-04-26 13:56:57.607640] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64701 ] 00:14:18.141 [2024-04-26 13:56:57.788265] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.400 [2024-04-26 13:56:58.021244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=0x1 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=compare 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@23 -- # accel_opc=compare 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=software 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 
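Before each accel_perf run the trace also shows build_accel_config: optional module snippets are appended to accel_json_cfg behind [[ ... -gt 0 ]] guards (all 0 in this job, so the array stays empty), then the entries are joined with a comma IFS and piped through jq -r, presumably producing the JSON that the -c /dev/fd/62 argument feeds to the perf app. A rough sketch of that shape, with placeholder flag and method names:

    # Rough shape of the traced config assembly; flag and method names are placeholders.
    build_accel_config() {
        accel_json_cfg=()
        # Optional module entries would be appended here behind flags, e.g.:
        # [[ ${SPDK_TEST_ACCEL_MODULE_A:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "module_a_scan_accel_module"}')
        local IFS=,
        jq -r . <<< "{\"subsystems\": [{\"subsystem\": \"accel\", \"config\": [${accel_json_cfg[*]}]}]}"
    }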
00:14:18.658 13:56:58 -- accel/accel.sh@22 -- # accel_module=software 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=32 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=32 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=1 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val=Yes 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:18.658 13:56:58 -- accel/accel.sh@20 -- # val= 00:14:18.658 13:56:58 -- accel/accel.sh@21 -- # case "$var" in 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # IFS=: 00:14:18.658 13:56:58 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@20 -- # val= 00:14:21.194 13:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@20 -- # val= 00:14:21.194 13:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@20 -- # val= 00:14:21.194 13:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@20 -- # val= 00:14:21.194 13:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@20 -- # val= 00:14:21.194 13:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@20 -- # val= 00:14:21.194 13:57:00 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:21.194 13:57:00 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:14:21.194 13:57:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:21.194 00:14:21.194 real 0m2.716s 00:14:21.194 user 0m2.429s 00:14:21.194 sys 
0m0.195s 00:14:21.194 13:57:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:21.194 ************************************ 00:14:21.194 END TEST accel_compare 00:14:21.194 ************************************ 00:14:21.194 13:57:00 -- common/autotest_common.sh@10 -- # set +x 00:14:21.194 13:57:00 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:14:21.194 13:57:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:21.194 13:57:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.194 13:57:00 -- common/autotest_common.sh@10 -- # set +x 00:14:21.194 ************************************ 00:14:21.194 START TEST accel_xor 00:14:21.194 ************************************ 00:14:21.194 13:57:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:14:21.194 13:57:00 -- accel/accel.sh@16 -- # local accel_opc 00:14:21.194 13:57:00 -- accel/accel.sh@17 -- # local accel_module 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # IFS=: 00:14:21.194 13:57:00 -- accel/accel.sh@19 -- # read -r var val 00:14:21.194 13:57:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:14:21.194 13:57:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:14:21.194 13:57:00 -- accel/accel.sh@12 -- # build_accel_config 00:14:21.194 13:57:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:21.194 13:57:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:21.194 13:57:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:21.194 13:57:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:21.194 13:57:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:21.194 13:57:00 -- accel/accel.sh@40 -- # local IFS=, 00:14:21.194 13:57:00 -- accel/accel.sh@41 -- # jq -r . 00:14:21.194 [2024-04-26 13:57:00.474019] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:21.194 [2024-04-26 13:57:00.474138] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64758 ] 00:14:21.194 [2024-04-26 13:57:00.643241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.453 [2024-04-26 13:57:00.883188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.714 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.714 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.714 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.714 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.714 13:57:01 -- accel/accel.sh@20 -- # val=0x1 00:14:21.714 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.714 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.714 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.714 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.714 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.714 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.714 13:57:01 -- accel/accel.sh@20 -- # val=xor 00:14:21.714 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@23 -- # accel_opc=xor 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val=2 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val=software 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@22 -- # accel_module=software 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val=32 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val=32 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val=1 00:14:21.715 13:57:01 -- 
accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val=Yes 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:21.715 13:57:01 -- accel/accel.sh@20 -- # val= 00:14:21.715 13:57:01 -- accel/accel.sh@21 -- # case "$var" in 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # IFS=: 00:14:21.715 13:57:01 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:23.621 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:23.621 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:23.621 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:23.621 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:23.621 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:23.621 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:23.621 13:57:03 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:23.621 13:57:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:23.621 00:14:23.621 real 0m2.693s 00:14:23.621 user 0m2.423s 00:14:23.621 sys 0m0.179s 00:14:23.621 13:57:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:23.621 ************************************ 00:14:23.621 END TEST accel_xor 00:14:23.621 ************************************ 00:14:23.621 13:57:03 -- common/autotest_common.sh@10 -- # set +x 00:14:23.621 13:57:03 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:14:23.621 13:57:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:23.621 13:57:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:23.621 13:57:03 -- common/autotest_common.sh@10 -- # set +x 00:14:23.621 ************************************ 00:14:23.621 START TEST accel_xor 00:14:23.621 ************************************ 00:14:23.621 
13:57:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:14:23.621 13:57:03 -- accel/accel.sh@16 -- # local accel_opc 00:14:23.621 13:57:03 -- accel/accel.sh@17 -- # local accel_module 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:23.621 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:23.621 13:57:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:14:23.621 13:57:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:14:23.621 13:57:03 -- accel/accel.sh@12 -- # build_accel_config 00:14:23.621 13:57:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:23.621 13:57:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:23.621 13:57:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:23.621 13:57:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:23.621 13:57:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:23.621 13:57:03 -- accel/accel.sh@40 -- # local IFS=, 00:14:23.621 13:57:03 -- accel/accel.sh@41 -- # jq -r . 00:14:23.879 [2024-04-26 13:57:03.325977] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:23.880 [2024-04-26 13:57:03.326098] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64809 ] 00:14:23.880 [2024-04-26 13:57:03.495543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.138 [2024-04-26 13:57:03.728414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=0x1 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=xor 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@23 -- # accel_opc=xor 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=3 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 
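The commands echoed by the wrapper can be replayed by hand on the test VM with the same binary and arguments (the -c /dev/fd/62 part is only needed when the harness is feeding in a config):

    # Invocations taken verbatim from this log, runnable directly on the VM:
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y        # earlier xor case
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3   # variant being set up here
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y   # earlier fill case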
00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=software 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@22 -- # accel_module=software 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=32 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=32 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=1 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val=Yes 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.397 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.397 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.397 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:24.398 13:57:03 -- accel/accel.sh@20 -- # val= 00:14:24.398 13:57:03 -- accel/accel.sh@21 -- # case "$var" in 00:14:24.398 13:57:03 -- accel/accel.sh@19 -- # IFS=: 00:14:24.398 13:57:03 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@20 -- # val= 00:14:26.303 13:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # IFS=: 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@20 -- # val= 00:14:26.303 13:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # IFS=: 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@20 -- # val= 00:14:26.303 13:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # IFS=: 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@20 -- # val= 00:14:26.303 13:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # IFS=: 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@20 -- # val= 00:14:26.303 13:57:05 -- accel/accel.sh@21 -- # case "$var" in 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # IFS=: 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@20 -- # val= 00:14:26.303 13:57:05 -- accel/accel.sh@21 -- # case "$var" in 
00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # IFS=: 00:14:26.303 13:57:05 -- accel/accel.sh@19 -- # read -r var val 00:14:26.303 13:57:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:26.303 13:57:05 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:14:26.303 13:57:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:26.303 00:14:26.303 real 0m2.641s 00:14:26.303 user 0m2.362s 00:14:26.303 sys 0m0.186s 00:14:26.303 13:57:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:26.303 ************************************ 00:14:26.303 END TEST accel_xor 00:14:26.303 ************************************ 00:14:26.303 13:57:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.303 13:57:05 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:14:26.303 13:57:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:26.303 13:57:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:26.303 13:57:05 -- common/autotest_common.sh@10 -- # set +x 00:14:26.563 ************************************ 00:14:26.563 START TEST accel_dif_verify 00:14:26.563 ************************************ 00:14:26.563 13:57:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:14:26.563 13:57:06 -- accel/accel.sh@16 -- # local accel_opc 00:14:26.563 13:57:06 -- accel/accel.sh@17 -- # local accel_module 00:14:26.563 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:26.563 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:26.563 13:57:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:14:26.563 13:57:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:14:26.563 13:57:06 -- accel/accel.sh@12 -- # build_accel_config 00:14:26.563 13:57:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:26.563 13:57:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:26.563 13:57:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:26.563 13:57:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:26.563 13:57:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:26.563 13:57:06 -- accel/accel.sh@40 -- # local IFS=, 00:14:26.563 13:57:06 -- accel/accel.sh@41 -- # jq -r . 00:14:26.563 [2024-04-26 13:57:06.116553] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:26.563 [2024-04-26 13:57:06.116665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64860 ] 00:14:26.823 [2024-04-26 13:57:06.284863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.082 [2024-04-26 13:57:06.514106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val=0x1 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val=dif_verify 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val='512 bytes' 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val='8 bytes' 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val=software 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@22 -- # accel_module=software 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 
-- # val=32 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val=32 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val=1 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val=No 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:27.342 13:57:06 -- accel/accel.sh@20 -- # val= 00:14:27.342 13:57:06 -- accel/accel.sh@21 -- # case "$var" in 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # IFS=: 00:14:27.342 13:57:06 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@20 -- # val= 00:14:29.250 13:57:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@20 -- # val= 00:14:29.250 13:57:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@20 -- # val= 00:14:29.250 13:57:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@20 -- # val= 00:14:29.250 13:57:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@20 -- # val= 00:14:29.250 13:57:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@20 -- # val= 00:14:29.250 13:57:08 -- accel/accel.sh@21 -- # case "$var" in 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:29.250 13:57:08 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:14:29.250 13:57:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:29.250 00:14:29.250 real 0m2.681s 00:14:29.250 user 0m2.402s 00:14:29.250 sys 0m0.184s 00:14:29.250 13:57:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:29.250 13:57:08 -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 ************************************ 00:14:29.250 END TEST 
accel_dif_verify 00:14:29.250 ************************************ 00:14:29.250 13:57:08 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:14:29.250 13:57:08 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:29.250 13:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:29.250 13:57:08 -- common/autotest_common.sh@10 -- # set +x 00:14:29.250 ************************************ 00:14:29.250 START TEST accel_dif_generate 00:14:29.250 ************************************ 00:14:29.250 13:57:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:14:29.250 13:57:08 -- accel/accel.sh@16 -- # local accel_opc 00:14:29.250 13:57:08 -- accel/accel.sh@17 -- # local accel_module 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # IFS=: 00:14:29.250 13:57:08 -- accel/accel.sh@19 -- # read -r var val 00:14:29.250 13:57:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:14:29.250 13:57:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:14:29.250 13:57:08 -- accel/accel.sh@12 -- # build_accel_config 00:14:29.250 13:57:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:29.250 13:57:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:29.250 13:57:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:29.250 13:57:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:29.250 13:57:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:29.250 13:57:08 -- accel/accel.sh@40 -- # local IFS=, 00:14:29.250 13:57:08 -- accel/accel.sh@41 -- # jq -r . 00:14:29.513 [2024-04-26 13:57:08.950479] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:29.513 [2024-04-26 13:57:08.950622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64916 ] 00:14:29.513 [2024-04-26 13:57:09.120738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.772 [2024-04-26 13:57:09.353534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val=0x1 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val=dif_generate 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val='512 bytes' 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val='8 bytes' 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val=software 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@22 -- # accel_module=software 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val=32 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val=32 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.030 13:57:09 -- accel/accel.sh@20 -- # val=1 00:14:30.030 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.030 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.031 13:57:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:30.031 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.031 13:57:09 -- accel/accel.sh@20 -- # val=No 00:14:30.031 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.031 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.031 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:30.031 13:57:09 -- accel/accel.sh@20 -- # val= 00:14:30.031 13:57:09 -- accel/accel.sh@21 -- # case "$var" in 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # IFS=: 00:14:30.031 13:57:09 -- accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@20 -- # val= 00:14:31.934 13:57:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:31.934 13:57:11 -- 
accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@20 -- # val= 00:14:31.934 13:57:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@20 -- # val= 00:14:31.934 13:57:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@20 -- # val= 00:14:31.934 13:57:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@20 -- # val= 00:14:31.934 13:57:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@20 -- # val= 00:14:31.934 13:57:11 -- accel/accel.sh@21 -- # case "$var" in 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:31.934 13:57:11 -- accel/accel.sh@19 -- # read -r var val 00:14:31.934 13:57:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:31.934 13:57:11 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:14:31.934 13:57:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:31.934 00:14:31.934 real 0m2.676s 00:14:31.934 user 0m2.412s 00:14:31.934 sys 0m0.176s 00:14:31.934 13:57:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:31.934 ************************************ 00:14:31.934 13:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:31.934 END TEST accel_dif_generate 00:14:31.934 ************************************ 00:14:32.198 13:57:11 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:14:32.198 13:57:11 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:14:32.198 13:57:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.198 13:57:11 -- common/autotest_common.sh@10 -- # set +x 00:14:32.198 ************************************ 00:14:32.198 START TEST accel_dif_generate_copy 00:14:32.198 ************************************ 00:14:32.198 13:57:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:14:32.198 13:57:11 -- accel/accel.sh@16 -- # local accel_opc 00:14:32.198 13:57:11 -- accel/accel.sh@17 -- # local accel_module 00:14:32.198 13:57:11 -- accel/accel.sh@19 -- # IFS=: 00:14:32.198 13:57:11 -- accel/accel.sh@19 -- # read -r var val 00:14:32.198 13:57:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:14:32.198 13:57:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:14:32.198 13:57:11 -- accel/accel.sh@12 -- # build_accel_config 00:14:32.198 13:57:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:32.198 13:57:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:32.198 13:57:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:32.198 13:57:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:32.198 13:57:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:32.198 13:57:11 -- accel/accel.sh@40 -- # local IFS=, 00:14:32.198 13:57:11 -- accel/accel.sh@41 -- # jq -r . 00:14:32.198 [2024-04-26 13:57:11.774387] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:32.198 [2024-04-26 13:57:11.774506] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64968 ] 00:14:32.457 [2024-04-26 13:57:11.944913] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.715 [2024-04-26 13:57:12.180830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val=0x1 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val=software 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@22 -- # accel_module=software 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val=32 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 -- # val=32 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.974 13:57:12 -- accel/accel.sh@20 
-- # val=1 00:14:32.974 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.974 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.975 13:57:12 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:32.975 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.975 13:57:12 -- accel/accel.sh@20 -- # val=No 00:14:32.975 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.975 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.975 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:32.975 13:57:12 -- accel/accel.sh@20 -- # val= 00:14:32.975 13:57:12 -- accel/accel.sh@21 -- # case "$var" in 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # IFS=: 00:14:32.975 13:57:12 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@20 -- # val= 00:14:34.876 13:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@20 -- # val= 00:14:34.876 13:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@20 -- # val= 00:14:34.876 13:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@20 -- # val= 00:14:34.876 13:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@20 -- # val= 00:14:34.876 13:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@20 -- # val= 00:14:34.876 13:57:14 -- accel/accel.sh@21 -- # case "$var" in 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:34.876 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:34.876 13:57:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:34.876 13:57:14 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:14:34.876 13:57:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:34.876 00:14:34.876 real 0m2.696s 00:14:34.876 user 0m2.415s 00:14:34.876 sys 0m0.186s 00:14:34.876 13:57:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:34.876 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:14:34.876 ************************************ 00:14:34.876 END TEST accel_dif_generate_copy 00:14:34.876 ************************************ 00:14:34.876 13:57:14 -- accel/accel.sh@115 -- # [[ y == y ]] 00:14:34.877 13:57:14 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:34.877 13:57:14 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:14:34.877 13:57:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.877 13:57:14 -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.136 ************************************ 00:14:35.136 START TEST accel_comp 00:14:35.136 ************************************ 00:14:35.136 13:57:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:35.136 13:57:14 -- accel/accel.sh@16 -- # local accel_opc 00:14:35.136 13:57:14 -- accel/accel.sh@17 -- # local accel_module 00:14:35.136 13:57:14 -- accel/accel.sh@19 -- # IFS=: 00:14:35.136 13:57:14 -- accel/accel.sh@19 -- # read -r var val 00:14:35.136 13:57:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:35.136 13:57:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:35.136 13:57:14 -- accel/accel.sh@12 -- # build_accel_config 00:14:35.136 13:57:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:35.136 13:57:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:35.136 13:57:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:35.136 13:57:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:35.136 13:57:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:35.136 13:57:14 -- accel/accel.sh@40 -- # local IFS=, 00:14:35.136 13:57:14 -- accel/accel.sh@41 -- # jq -r . 00:14:35.136 [2024-04-26 13:57:14.616408] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:35.136 [2024-04-26 13:57:14.616516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65024 ] 00:14:35.136 [2024-04-26 13:57:14.786475] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.394 [2024-04-26 13:57:15.019154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val=0x1 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val=compress 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@23 
-- # accel_opc=compress 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.653 13:57:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:35.653 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.653 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val=software 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@22 -- # accel_module=software 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val=32 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val=32 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val=1 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val=No 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:35.654 13:57:15 -- accel/accel.sh@20 -- # val= 00:14:35.654 13:57:15 -- accel/accel.sh@21 -- # case "$var" in 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # IFS=: 00:14:35.654 13:57:15 -- accel/accel.sh@19 -- # read -r var val 00:14:37.576 13:57:17 -- accel/accel.sh@20 -- # val= 00:14:37.576 13:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # read -r var val 00:14:37.576 13:57:17 -- accel/accel.sh@20 -- # val= 00:14:37.576 13:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # read -r var val 00:14:37.576 13:57:17 -- accel/accel.sh@20 -- # val= 00:14:37.576 13:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # 
read -r var val 00:14:37.576 13:57:17 -- accel/accel.sh@20 -- # val= 00:14:37.576 13:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # read -r var val 00:14:37.576 13:57:17 -- accel/accel.sh@20 -- # val= 00:14:37.576 13:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # read -r var val 00:14:37.576 13:57:17 -- accel/accel.sh@20 -- # val= 00:14:37.576 13:57:17 -- accel/accel.sh@21 -- # case "$var" in 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.576 13:57:17 -- accel/accel.sh@19 -- # read -r var val 00:14:37.835 13:57:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:37.835 13:57:17 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:14:37.835 13:57:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:37.835 00:14:37.835 real 0m2.696s 00:14:37.835 user 0m2.424s 00:14:37.835 sys 0m0.183s 00:14:37.835 13:57:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.835 13:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:37.835 ************************************ 00:14:37.835 END TEST accel_comp 00:14:37.835 ************************************ 00:14:37.835 13:57:17 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:37.835 13:57:17 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:14:37.835 13:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.835 13:57:17 -- common/autotest_common.sh@10 -- # set +x 00:14:37.835 ************************************ 00:14:37.835 START TEST accel_decomp 00:14:37.835 ************************************ 00:14:37.835 13:57:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:37.835 13:57:17 -- accel/accel.sh@16 -- # local accel_opc 00:14:37.835 13:57:17 -- accel/accel.sh@17 -- # local accel_module 00:14:37.835 13:57:17 -- accel/accel.sh@19 -- # IFS=: 00:14:37.835 13:57:17 -- accel/accel.sh@19 -- # read -r var val 00:14:37.835 13:57:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:37.835 13:57:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:14:37.835 13:57:17 -- accel/accel.sh@12 -- # build_accel_config 00:14:37.835 13:57:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:37.835 13:57:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:37.835 13:57:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:37.835 13:57:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:37.835 13:57:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:37.835 13:57:17 -- accel/accel.sh@40 -- # local IFS=, 00:14:37.835 13:57:17 -- accel/accel.sh@41 -- # jq -r . 00:14:37.835 [2024-04-26 13:57:17.459982] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:37.835 [2024-04-26 13:57:17.460369] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65080 ] 00:14:38.093 [2024-04-26 13:57:17.633878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.351 [2024-04-26 13:57:17.868926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=0x1 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=decompress 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=software 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@22 -- # accel_module=software 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=32 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- 
accel/accel.sh@20 -- # val=32 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=1 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val=Yes 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:38.609 13:57:18 -- accel/accel.sh@20 -- # val= 00:14:38.609 13:57:18 -- accel/accel.sh@21 -- # case "$var" in 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # IFS=: 00:14:38.609 13:57:18 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:40.509 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:40.509 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:40.509 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:40.509 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:40.509 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:40.509 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.509 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.509 13:57:20 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:40.509 13:57:20 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:40.509 ************************************ 00:14:40.509 END TEST accel_decomp 00:14:40.509 ************************************ 00:14:40.509 13:57:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:40.509 00:14:40.509 real 0m2.696s 00:14:40.509 user 0m2.412s 00:14:40.509 sys 0m0.193s 00:14:40.509 13:57:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:40.510 13:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:40.510 13:57:20 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:14:40.510 13:57:20 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:40.510 13:57:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:40.510 13:57:20 -- common/autotest_common.sh@10 -- # set +x 00:14:40.833 ************************************ 00:14:40.833 START TEST accel_decmop_full 00:14:40.833 ************************************ 00:14:40.833 13:57:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:40.833 13:57:20 -- accel/accel.sh@16 -- # local accel_opc 00:14:40.833 13:57:20 -- accel/accel.sh@17 -- # local accel_module 00:14:40.833 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:40.833 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:40.833 13:57:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:40.833 13:57:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:14:40.833 13:57:20 -- accel/accel.sh@12 -- # build_accel_config 00:14:40.833 13:57:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:40.833 13:57:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:40.833 13:57:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:40.833 13:57:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:40.833 13:57:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:40.833 13:57:20 -- accel/accel.sh@40 -- # local IFS=, 00:14:40.833 13:57:20 -- accel/accel.sh@41 -- # jq -r . 00:14:40.833 [2024-04-26 13:57:20.315189] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:40.833 [2024-04-26 13:57:20.315315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65125 ] 00:14:41.106 [2024-04-26 13:57:20.488235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.106 [2024-04-26 13:57:20.725888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=0x1 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 
13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=decompress 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=software 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@22 -- # accel_module=software 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=32 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=32 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=1 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val=Yes 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:20 -- accel/accel.sh@19 -- # read -r var val 00:14:41.365 13:57:20 -- accel/accel.sh@20 -- # val= 00:14:41.365 13:57:20 -- accel/accel.sh@21 -- # case "$var" in 00:14:41.365 13:57:21 -- accel/accel.sh@19 -- # IFS=: 00:14:41.365 13:57:21 -- accel/accel.sh@19 -- # read -r var val 00:14:43.269 13:57:22 -- accel/accel.sh@20 -- # val= 00:14:43.269 13:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # IFS=: 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # read -r var val 00:14:43.269 13:57:22 -- accel/accel.sh@20 -- # val= 00:14:43.269 13:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # IFS=: 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # read -r 
var val 00:14:43.269 13:57:22 -- accel/accel.sh@20 -- # val= 00:14:43.269 13:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # IFS=: 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # read -r var val 00:14:43.269 13:57:22 -- accel/accel.sh@20 -- # val= 00:14:43.269 13:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # IFS=: 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # read -r var val 00:14:43.269 13:57:22 -- accel/accel.sh@20 -- # val= 00:14:43.269 13:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # IFS=: 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # read -r var val 00:14:43.269 13:57:22 -- accel/accel.sh@20 -- # val= 00:14:43.269 13:57:22 -- accel/accel.sh@21 -- # case "$var" in 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # IFS=: 00:14:43.269 13:57:22 -- accel/accel.sh@19 -- # read -r var val 00:14:43.527 13:57:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:43.527 13:57:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:43.527 13:57:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:43.527 00:14:43.527 real 0m2.702s 00:14:43.527 user 0m2.421s 00:14:43.527 sys 0m0.190s 00:14:43.527 13:57:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:43.527 13:57:22 -- common/autotest_common.sh@10 -- # set +x 00:14:43.527 ************************************ 00:14:43.527 END TEST accel_decmop_full 00:14:43.527 ************************************ 00:14:43.527 13:57:23 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:43.527 13:57:23 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:43.527 13:57:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:43.527 13:57:23 -- common/autotest_common.sh@10 -- # set +x 00:14:43.527 ************************************ 00:14:43.527 START TEST accel_decomp_mcore 00:14:43.527 ************************************ 00:14:43.527 13:57:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:43.527 13:57:23 -- accel/accel.sh@16 -- # local accel_opc 00:14:43.527 13:57:23 -- accel/accel.sh@17 -- # local accel_module 00:14:43.527 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:43.527 13:57:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:43.527 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:43.527 13:57:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:14:43.527 13:57:23 -- accel/accel.sh@12 -- # build_accel_config 00:14:43.527 13:57:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:43.527 13:57:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:43.527 13:57:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:43.527 13:57:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:43.527 13:57:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:43.527 13:57:23 -- accel/accel.sh@40 -- # local IFS=, 00:14:43.527 13:57:23 -- accel/accel.sh@41 -- # jq -r . 00:14:43.527 [2024-04-26 13:57:23.163908] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:14:43.527 [2024-04-26 13:57:23.164012] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65181 ] 00:14:43.785 [2024-04-26 13:57:23.329023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.044 [2024-04-26 13:57:23.566730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.044 [2024-04-26 13:57:23.566780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.044 [2024-04-26 13:57:23.566833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.044 [2024-04-26 13:57:23.566870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=0xf 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=decompress 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=software 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@22 -- # accel_module=software 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 
00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=32 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=32 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=1 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val=Yes 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.303 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.303 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.303 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.304 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:44.304 13:57:23 -- accel/accel.sh@20 -- # val= 00:14:44.304 13:57:23 -- accel/accel.sh@21 -- # case "$var" in 00:14:44.304 13:57:23 -- accel/accel.sh@19 -- # IFS=: 00:14:44.304 13:57:23 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- 
accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@20 -- # val= 00:14:46.207 13:57:25 -- accel/accel.sh@21 -- # case "$var" in 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # IFS=: 00:14:46.207 13:57:25 -- accel/accel.sh@19 -- # read -r var val 00:14:46.207 13:57:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:46.207 13:57:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:46.207 13:57:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:46.207 00:14:46.207 real 0m2.739s 00:14:46.207 user 0m7.869s 00:14:46.207 sys 0m0.225s 00:14:46.207 13:57:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:46.207 13:57:25 -- common/autotest_common.sh@10 -- # set +x 00:14:46.207 ************************************ 00:14:46.207 END TEST accel_decomp_mcore 00:14:46.207 ************************************ 00:14:46.465 13:57:25 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:46.465 13:57:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:46.465 13:57:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.465 13:57:25 -- common/autotest_common.sh@10 -- # set +x 00:14:46.465 ************************************ 00:14:46.465 START TEST accel_decomp_full_mcore 00:14:46.465 ************************************ 00:14:46.465 13:57:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:46.465 13:57:26 -- accel/accel.sh@16 -- # local accel_opc 00:14:46.465 13:57:26 -- accel/accel.sh@17 -- # local accel_module 00:14:46.465 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:46.465 13:57:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:46.465 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:46.465 13:57:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:14:46.465 13:57:26 -- accel/accel.sh@12 -- # build_accel_config 00:14:46.465 13:57:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:46.465 13:57:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:46.465 13:57:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:46.465 13:57:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:46.465 13:57:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:46.465 13:57:26 -- accel/accel.sh@40 -- # local IFS=, 00:14:46.465 13:57:26 -- accel/accel.sh@41 -- # jq -r . 00:14:46.465 [2024-04-26 13:57:26.060458] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
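The accel_decomp_full_mcore case above drives the accel_perf example binary with an accel JSON config piped in on /dev/fd/62. As a rough standalone sketch (paths assumed relative to the SPDK checkout shown in the trace; flag glosses read off the trace itself: -t run time in seconds, -w workload, -l compressed input file, -y verify the decompressed output, -m core mask, here 0xf for four cores):

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf

On a host without hardware accel engines the software module services the operation, which is what the later '[[ -n software ]]' and '[[ software == software ]]' checks in the trace assert.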
00:14:46.466 [2024-04-26 13:57:26.060566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65240 ] 00:14:46.734 [2024-04-26 13:57:26.231885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.992 [2024-04-26 13:57:26.473330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.992 [2024-04-26 13:57:26.473537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.992 [2024-04-26 13:57:26.474509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.992 [2024-04-26 13:57:26.474538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=0xf 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=decompress 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=software 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@22 -- # accel_module=software 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 
00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=32 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=32 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=1 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val=Yes 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:47.251 13:57:26 -- accel/accel.sh@20 -- # val= 00:14:47.251 13:57:26 -- accel/accel.sh@21 -- # case "$var" in 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # IFS=: 00:14:47.251 13:57:26 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- 
accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@20 -- # val= 00:14:49.155 13:57:28 -- accel/accel.sh@21 -- # case "$var" in 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.155 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.155 13:57:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:49.155 13:57:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:49.155 13:57:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:49.155 00:14:49.155 real 0m2.772s 00:14:49.155 user 0m8.004s 00:14:49.155 sys 0m0.214s 00:14:49.155 ************************************ 00:14:49.155 END TEST accel_decomp_full_mcore 00:14:49.155 ************************************ 00:14:49.155 13:57:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:49.155 13:57:28 -- common/autotest_common.sh@10 -- # set +x 00:14:49.414 13:57:28 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:49.414 13:57:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:49.414 13:57:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:49.414 13:57:28 -- common/autotest_common.sh@10 -- # set +x 00:14:49.414 ************************************ 00:14:49.414 START TEST accel_decomp_mthread 00:14:49.414 ************************************ 00:14:49.414 13:57:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:49.414 13:57:28 -- accel/accel.sh@16 -- # local accel_opc 00:14:49.414 13:57:28 -- accel/accel.sh@17 -- # local accel_module 00:14:49.414 13:57:28 -- accel/accel.sh@19 -- # IFS=: 00:14:49.414 13:57:28 -- accel/accel.sh@19 -- # read -r var val 00:14:49.414 13:57:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:49.414 13:57:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:14:49.414 13:57:28 -- accel/accel.sh@12 -- # build_accel_config 00:14:49.414 13:57:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:49.414 13:57:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:49.414 13:57:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:49.414 13:57:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:49.414 13:57:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:49.414 13:57:28 -- accel/accel.sh@40 -- # local IFS=, 00:14:49.414 13:57:28 -- accel/accel.sh@41 -- # jq -r . 00:14:49.414 [2024-04-26 13:57:28.989807] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
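The accel_decomp_mthread case that starts here swaps the four-core mask for a single core (the EAL line below reports -c 0x1) and instead passes -T 2, which shows up in the trace as val=2 and appears to request two worker threads on that core. A minimal sketch under the same assumptions as above:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2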
00:14:49.414 [2024-04-26 13:57:28.989928] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65294 ] 00:14:49.673 [2024-04-26 13:57:29.156222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.931 [2024-04-26 13:57:29.394104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=0x1 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=decompress 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val='4096 bytes' 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=software 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@22 -- # accel_module=software 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=32 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- 
accel/accel.sh@20 -- # val=32 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=2 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val=Yes 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:50.190 13:57:29 -- accel/accel.sh@20 -- # val= 00:14:50.190 13:57:29 -- accel/accel.sh@21 -- # case "$var" in 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # IFS=: 00:14:50.190 13:57:29 -- accel/accel.sh@19 -- # read -r var val 00:14:52.095 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@20 -- # val= 00:14:52.096 13:57:31 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:52.096 13:57:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:52.096 13:57:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:52.096 00:14:52.096 real 0m2.668s 00:14:52.096 user 0m2.391s 00:14:52.096 sys 0m0.187s 00:14:52.096 13:57:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.096 ************************************ 00:14:52.096 END TEST accel_decomp_mthread 00:14:52.096 
************************************ 00:14:52.096 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.096 13:57:31 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:52.096 13:57:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:52.096 13:57:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:52.096 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.096 ************************************ 00:14:52.096 START TEST accel_deomp_full_mthread 00:14:52.096 ************************************ 00:14:52.096 13:57:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:52.096 13:57:31 -- accel/accel.sh@16 -- # local accel_opc 00:14:52.096 13:57:31 -- accel/accel.sh@17 -- # local accel_module 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # IFS=: 00:14:52.096 13:57:31 -- accel/accel.sh@19 -- # read -r var val 00:14:52.096 13:57:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:52.096 13:57:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:14:52.096 13:57:31 -- accel/accel.sh@12 -- # build_accel_config 00:14:52.096 13:57:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:52.096 13:57:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:52.096 13:57:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:52.096 13:57:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:52.096 13:57:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:52.096 13:57:31 -- accel/accel.sh@40 -- # local IFS=, 00:14:52.096 13:57:31 -- accel/accel.sh@41 -- # jq -r . 00:14:52.355 [2024-04-26 13:57:31.816726] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
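accel_deomp_full_mthread repeats the threaded run but adds -o 0. In this log the "full" variants report a '111250 bytes' transfer (the whole bib test file) where the plain mthread run used '4096 bytes', so -o sets the transfer size and 0 appears to mean "use the entire input". Sketch, same assumptions as above:

  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2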
00:14:52.355 [2024-04-26 13:57:31.816846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65344 ] 00:14:52.355 [2024-04-26 13:57:31.987192] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.614 [2024-04-26 13:57:32.221734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.872 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.872 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.872 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.872 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.872 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=0x1 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=decompress 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@23 -- # accel_opc=decompress 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val='111250 bytes' 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=software 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@22 -- # accel_module=software 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=32 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- 
accel/accel.sh@20 -- # val=32 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=2 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val='1 seconds' 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val=Yes 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:52.873 13:57:32 -- accel/accel.sh@20 -- # val= 00:14:52.873 13:57:32 -- accel/accel.sh@21 -- # case "$var" in 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # IFS=: 00:14:52.873 13:57:32 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@20 -- # val= 00:14:55.420 13:57:34 -- accel/accel.sh@21 -- # case "$var" in 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # IFS=: 00:14:55.420 13:57:34 -- accel/accel.sh@19 -- # read -r var val 00:14:55.420 13:57:34 -- accel/accel.sh@27 -- # [[ -n software ]] 00:14:55.420 13:57:34 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:14:55.420 13:57:34 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:14:55.420 00:14:55.420 real 0m2.750s 00:14:55.420 user 0m2.463s 00:14:55.420 sys 0m0.191s 00:14:55.420 13:57:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:55.420 13:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:55.420 ************************************ 00:14:55.420 END 
TEST accel_deomp_full_mthread 00:14:55.420 ************************************ 00:14:55.420 13:57:34 -- accel/accel.sh@124 -- # [[ n == y ]] 00:14:55.420 13:57:34 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:55.420 13:57:34 -- accel/accel.sh@137 -- # build_accel_config 00:14:55.420 13:57:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:55.420 13:57:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:55.420 13:57:34 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:14:55.420 13:57:34 -- common/autotest_common.sh@10 -- # set +x 00:14:55.420 13:57:34 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:14:55.420 13:57:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:14:55.420 13:57:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:14:55.420 13:57:34 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:14:55.420 13:57:34 -- accel/accel.sh@40 -- # local IFS=, 00:14:55.420 13:57:34 -- accel/accel.sh@41 -- # jq -r . 00:14:55.420 ************************************ 00:14:55.420 START TEST accel_dif_functional_tests 00:14:55.420 ************************************ 00:14:55.420 13:57:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:14:55.420 [2024-04-26 13:57:34.744001] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:55.420 [2024-04-26 13:57:34.744122] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65404 ] 00:14:55.420 [2024-04-26 13:57:34.915566] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:55.678 [2024-04-26 13:57:35.155684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.678 [2024-04-26 13:57:35.155773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.678 [2024-04-26 13:57:35.155803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.936 00:14:55.936 00:14:55.936 CUnit - A unit testing framework for C - Version 2.1-3 00:14:55.936 http://cunit.sourceforge.net/ 00:14:55.936 00:14:55.936 00:14:55.936 Suite: accel_dif 00:14:55.936 Test: verify: DIF generated, GUARD check ...passed 00:14:55.936 Test: verify: DIF generated, APPTAG check ...passed 00:14:55.936 Test: verify: DIF generated, REFTAG check ...passed 00:14:55.936 Test: verify: DIF not generated, GUARD check ...passed 00:14:55.936 Test: verify: DIF not generated, APPTAG check ...passed 00:14:55.936 Test: verify: DIF not generated, REFTAG check ...passed 00:14:55.936 Test: verify: APPTAG correct, APPTAG check ...passed 00:14:55.936 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:14:55.936 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:14:55.936 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:14:55.936 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:14:55.936 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:14:55.936 Test: generate copy: DIF generated, GUARD check ...passed 00:14:55.936 Test: generate copy: DIF generated, APTTAG check ...passed 00:14:55.936 Test: generate copy: DIF generated, REFTAG check ...passed 00:14:55.936 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:14:55.936 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-26 13:57:35.547685] dif.c: 826:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:55.936 [2024-04-26 13:57:35.547749] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:14:55.936 [2024-04-26 13:57:35.547796] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:55.936 [2024-04-26 13:57:35.547821] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:14:55.936 [2024-04-26 13:57:35.547855] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:55.936 [2024-04-26 13:57:35.547886] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:14:55.936 [2024-04-26 13:57:35.547958] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:14:55.936 [2024-04-26 13:57:35.548116] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:14:55.936 passed 00:14:55.936 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:14:55.936 Test: generate copy: iovecs-len validate ...passed 00:14:55.936 Test: generate copy: buffer alignment validate ...passed 00:14:55.936 00:14:55.936 Run Summary: Type Total Ran Passed Failed Inactive 00:14:55.936 suites 1 1 n/a 0 0 00:14:55.936 tests 20 20 20 0 0 00:14:55.936 asserts 204 204 204 0 n/a 00:14:55.936 00:14:55.936 Elapsed time = 0.003 seconds 00:14:55.936 [2024-04-26 13:57:35.548460] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:14:57.311 00:14:57.311 real 0m2.171s 00:14:57.311 user 0m4.259s 00:14:57.311 sys 0m0.238s 00:14:57.311 13:57:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:57.311 13:57:36 -- common/autotest_common.sh@10 -- # set +x 00:14:57.311 ************************************ 00:14:57.311 END TEST accel_dif_functional_tests 00:14:57.311 ************************************ 00:14:57.311 ************************************ 00:14:57.311 END TEST accel 00:14:57.311 ************************************ 00:14:57.311 00:14:57.311 real 1m7.627s 00:14:57.311 user 1m11.830s 00:14:57.311 sys 0m7.205s 00:14:57.311 13:57:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:57.311 13:57:36 -- common/autotest_common.sh@10 -- # set +x 00:14:57.311 13:57:36 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:57.311 13:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:57.311 13:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:57.311 13:57:36 -- common/autotest_common.sh@10 -- # set +x 00:14:57.568 ************************************ 00:14:57.568 START TEST accel_rpc 00:14:57.568 ************************************ 00:14:57.568 13:57:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:14:57.568 * Looking for test storage... 
00:14:57.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:14:57.568 13:57:37 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:14:57.568 13:57:37 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65497 00:14:57.568 13:57:37 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:14:57.568 13:57:37 -- accel/accel_rpc.sh@15 -- # waitforlisten 65497 00:14:57.568 13:57:37 -- common/autotest_common.sh@817 -- # '[' -z 65497 ']' 00:14:57.568 13:57:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.568 13:57:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:57.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.568 13:57:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.568 13:57:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:57.568 13:57:37 -- common/autotest_common.sh@10 -- # set +x 00:14:57.826 [2024-04-26 13:57:37.282511] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:14:57.826 [2024-04-26 13:57:37.282675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65497 ] 00:14:57.826 [2024-04-26 13:57:37.445791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.084 [2024-04-26 13:57:37.699118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.652 13:57:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:58.652 13:57:38 -- common/autotest_common.sh@850 -- # return 0 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:14:58.652 13:57:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:58.652 13:57:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.652 13:57:38 -- common/autotest_common.sh@10 -- # set +x 00:14:58.652 ************************************ 00:14:58.652 START TEST accel_assign_opcode 00:14:58.652 ************************************ 00:14:58.652 13:57:38 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:14:58.652 13:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.652 13:57:38 -- common/autotest_common.sh@10 -- # set +x 00:14:58.652 [2024-04-26 13:57:38.239204] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:14:58.652 13:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:14:58.652 13:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.652 13:57:38 -- common/autotest_common.sh@10 -- # set +x 00:14:58.652 [2024-04-26 13:57:38.251133] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:14:58.652 13:57:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:58.652 13:57:38 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:14:58.652 13:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:58.652 13:57:38 -- common/autotest_common.sh@10 -- # set +x 00:14:59.589 13:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.589 13:57:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:14:59.589 13:57:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:59.589 13:57:39 -- common/autotest_common.sh@10 -- # set +x 00:14:59.589 13:57:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:14:59.589 13:57:39 -- accel/accel_rpc.sh@42 -- # grep software 00:14:59.589 13:57:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:59.848 software 00:14:59.848 ************************************ 00:14:59.848 END TEST accel_assign_opcode 00:14:59.848 ************************************ 00:14:59.848 00:14:59.848 real 0m1.055s 00:14:59.848 user 0m0.053s 00:14:59.848 sys 0m0.014s 00:14:59.848 13:57:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:59.848 13:57:39 -- common/autotest_common.sh@10 -- # set +x 00:14:59.848 13:57:39 -- accel/accel_rpc.sh@55 -- # killprocess 65497 00:14:59.848 13:57:39 -- common/autotest_common.sh@936 -- # '[' -z 65497 ']' 00:14:59.848 13:57:39 -- common/autotest_common.sh@940 -- # kill -0 65497 00:14:59.848 13:57:39 -- common/autotest_common.sh@941 -- # uname 00:14:59.848 13:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.848 13:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65497 00:14:59.848 killing process with pid 65497 00:14:59.848 13:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.848 13:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.848 13:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65497' 00:14:59.848 13:57:39 -- common/autotest_common.sh@955 -- # kill 65497 00:14:59.848 13:57:39 -- common/autotest_common.sh@960 -- # wait 65497 00:15:02.382 ************************************ 00:15:02.382 END TEST accel_rpc 00:15:02.382 ************************************ 00:15:02.382 00:15:02.382 real 0m4.900s 00:15:02.382 user 0m4.803s 00:15:02.382 sys 0m0.639s 00:15:02.382 13:57:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:02.382 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:02.382 13:57:41 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:02.382 13:57:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:02.382 13:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.382 13:57:41 -- common/autotest_common.sh@10 -- # set +x 00:15:02.641 ************************************ 00:15:02.641 START TEST app_cmdline 00:15:02.641 ************************************ 00:15:02.641 13:57:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:02.641 * Looking for test storage... 00:15:02.641 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:02.641 13:57:42 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:15:02.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
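The app_cmdline test starting here launches spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two RPCs should be reachable. A minimal way to exercise this by hand, assuming the default /var/tmp/spdk.sock socket and the rpc.py path from the trace:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # expected to fail: not on the allow-list

The first call returns the version JSON shown below, and the disallowed call is what produces the Code=-32601 'Method not found' error later in this section.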
00:15:02.641 13:57:42 -- app/cmdline.sh@17 -- # spdk_tgt_pid=65652 00:15:02.641 13:57:42 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:15:02.641 13:57:42 -- app/cmdline.sh@18 -- # waitforlisten 65652 00:15:02.641 13:57:42 -- common/autotest_common.sh@817 -- # '[' -z 65652 ']' 00:15:02.641 13:57:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.641 13:57:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:02.641 13:57:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.641 13:57:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:02.641 13:57:42 -- common/autotest_common.sh@10 -- # set +x 00:15:02.641 [2024-04-26 13:57:42.309499] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:02.641 [2024-04-26 13:57:42.309622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65652 ] 00:15:02.900 [2024-04-26 13:57:42.473663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.158 [2024-04-26 13:57:42.705626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.093 13:57:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:04.093 13:57:43 -- common/autotest_common.sh@850 -- # return 0 00:15:04.093 13:57:43 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:15:04.352 { 00:15:04.352 "fields": { 00:15:04.352 "commit": "8571999d8", 00:15:04.352 "major": 24, 00:15:04.352 "minor": 5, 00:15:04.352 "patch": 0, 00:15:04.352 "suffix": "-pre" 00:15:04.352 }, 00:15:04.352 "version": "SPDK v24.05-pre git sha1 8571999d8" 00:15:04.352 } 00:15:04.352 13:57:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:15:04.352 13:57:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:15:04.352 13:57:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:15:04.352 13:57:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:15:04.352 13:57:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:15:04.352 13:57:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:15:04.352 13:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:04.352 13:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:04.352 13:57:43 -- app/cmdline.sh@26 -- # sort 00:15:04.352 13:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:04.352 13:57:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:15:04.352 13:57:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:15:04.352 13:57:43 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:04.352 13:57:43 -- common/autotest_common.sh@638 -- # local es=0 00:15:04.352 13:57:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:04.352 13:57:43 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.352 13:57:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.352 13:57:43 -- common/autotest_common.sh@630 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.352 13:57:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.352 13:57:43 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.352 13:57:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:04.352 13:57:43 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.352 13:57:43 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:04.352 13:57:43 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:04.653 2024/04/26 13:57:44 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:15:04.654 request: 00:15:04.654 { 00:15:04.654 "method": "env_dpdk_get_mem_stats", 00:15:04.654 "params": {} 00:15:04.654 } 00:15:04.654 Got JSON-RPC error response 00:15:04.654 GoRPCClient: error on JSON-RPC call 00:15:04.654 13:57:44 -- common/autotest_common.sh@641 -- # es=1 00:15:04.654 13:57:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:04.654 13:57:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:04.654 13:57:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:04.654 13:57:44 -- app/cmdline.sh@1 -- # killprocess 65652 00:15:04.654 13:57:44 -- common/autotest_common.sh@936 -- # '[' -z 65652 ']' 00:15:04.654 13:57:44 -- common/autotest_common.sh@940 -- # kill -0 65652 00:15:04.654 13:57:44 -- common/autotest_common.sh@941 -- # uname 00:15:04.654 13:57:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.654 13:57:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65652 00:15:04.654 13:57:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.654 13:57:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.654 killing process with pid 65652 00:15:04.654 13:57:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65652' 00:15:04.654 13:57:44 -- common/autotest_common.sh@955 -- # kill 65652 00:15:04.654 13:57:44 -- common/autotest_common.sh@960 -- # wait 65652 00:15:07.229 00:15:07.229 real 0m4.451s 00:15:07.229 user 0m4.629s 00:15:07.229 sys 0m0.596s 00:15:07.229 13:57:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.229 13:57:46 -- common/autotest_common.sh@10 -- # set +x 00:15:07.229 ************************************ 00:15:07.229 END TEST app_cmdline 00:15:07.229 ************************************ 00:15:07.229 13:57:46 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:07.229 13:57:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:07.229 13:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.229 13:57:46 -- common/autotest_common.sh@10 -- # set +x 00:15:07.229 ************************************ 00:15:07.229 START TEST version 00:15:07.229 ************************************ 00:15:07.229 13:57:46 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:07.229 * Looking for test storage... 
00:15:07.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:07.229 13:57:46 -- app/version.sh@17 -- # get_header_version major 00:15:07.229 13:57:46 -- app/version.sh@14 -- # cut -f2 00:15:07.229 13:57:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.229 13:57:46 -- app/version.sh@14 -- # tr -d '"' 00:15:07.229 13:57:46 -- app/version.sh@17 -- # major=24 00:15:07.229 13:57:46 -- app/version.sh@18 -- # get_header_version minor 00:15:07.229 13:57:46 -- app/version.sh@14 -- # cut -f2 00:15:07.229 13:57:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.229 13:57:46 -- app/version.sh@14 -- # tr -d '"' 00:15:07.229 13:57:46 -- app/version.sh@18 -- # minor=5 00:15:07.229 13:57:46 -- app/version.sh@19 -- # get_header_version patch 00:15:07.229 13:57:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.229 13:57:46 -- app/version.sh@14 -- # cut -f2 00:15:07.229 13:57:46 -- app/version.sh@14 -- # tr -d '"' 00:15:07.229 13:57:46 -- app/version.sh@19 -- # patch=0 00:15:07.229 13:57:46 -- app/version.sh@20 -- # get_header_version suffix 00:15:07.229 13:57:46 -- app/version.sh@14 -- # cut -f2 00:15:07.229 13:57:46 -- app/version.sh@14 -- # tr -d '"' 00:15:07.229 13:57:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:07.229 13:57:46 -- app/version.sh@20 -- # suffix=-pre 00:15:07.229 13:57:46 -- app/version.sh@22 -- # version=24.5 00:15:07.229 13:57:46 -- app/version.sh@25 -- # (( patch != 0 )) 00:15:07.229 13:57:46 -- app/version.sh@28 -- # version=24.5rc0 00:15:07.229 13:57:46 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:07.229 13:57:46 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:15:07.229 13:57:46 -- app/version.sh@30 -- # py_version=24.5rc0 00:15:07.229 13:57:46 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:15:07.229 00:15:07.229 real 0m0.217s 00:15:07.229 user 0m0.117s 00:15:07.229 sys 0m0.146s 00:15:07.229 13:57:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:07.229 ************************************ 00:15:07.229 END TEST version 00:15:07.229 ************************************ 00:15:07.229 13:57:46 -- common/autotest_common.sh@10 -- # set +x 00:15:07.488 13:57:46 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:15:07.488 13:57:46 -- spdk/autotest.sh@194 -- # uname -s 00:15:07.488 13:57:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:15:07.488 13:57:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:07.488 13:57:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:15:07.488 13:57:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:15:07.488 13:57:46 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:15:07.488 13:57:46 -- spdk/autotest.sh@258 -- # timing_exit lib 00:15:07.488 13:57:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:07.488 13:57:46 -- common/autotest_common.sh@10 -- # set +x 00:15:07.488 13:57:47 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:15:07.488 13:57:47 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:15:07.488 13:57:47 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:15:07.488 13:57:47 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:15:07.488 13:57:47 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:15:07.488 13:57:47 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:15:07.488 13:57:47 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:07.488 13:57:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.488 13:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.488 13:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:07.488 ************************************ 00:15:07.488 START TEST nvmf_tcp 00:15:07.488 ************************************ 00:15:07.488 13:57:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:15:07.747 * Looking for test storage... 00:15:07.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@10 -- # uname -s 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:07.747 13:57:47 -- nvmf/common.sh@7 -- # uname -s 00:15:07.747 13:57:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.747 13:57:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.747 13:57:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.747 13:57:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.747 13:57:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.747 13:57:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.747 13:57:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.747 13:57:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.747 13:57:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.747 13:57:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.747 13:57:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:07.747 13:57:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:07.747 13:57:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.747 13:57:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.747 13:57:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:07.747 13:57:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.747 13:57:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:07.747 13:57:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.747 13:57:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.747 13:57:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.747 13:57:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.747 13:57:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.747 13:57:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.747 13:57:47 -- paths/export.sh@5 -- # export PATH 00:15:07.747 13:57:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.747 13:57:47 -- nvmf/common.sh@47 -- # : 0 00:15:07.747 13:57:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:07.747 13:57:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:07.747 13:57:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:07.747 13:57:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.747 13:57:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.747 13:57:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:07.747 13:57:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:07.747 13:57:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:15:07.747 13:57:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:07.747 13:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:15:07.747 13:57:47 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:07.747 13:57:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:07.747 13:57:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:07.747 13:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:07.747 ************************************ 00:15:07.747 START TEST nvmf_example 00:15:07.747 ************************************ 00:15:07.747 13:57:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:08.006 * Looking for test storage... 
00:15:08.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:08.006 13:57:47 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:08.006 13:57:47 -- nvmf/common.sh@7 -- # uname -s 00:15:08.006 13:57:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.006 13:57:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.006 13:57:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.006 13:57:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.006 13:57:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.006 13:57:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.006 13:57:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.006 13:57:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.006 13:57:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.006 13:57:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.006 13:57:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:08.006 13:57:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:08.006 13:57:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.006 13:57:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.006 13:57:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:08.006 13:57:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.006 13:57:47 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:08.006 13:57:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.006 13:57:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.006 13:57:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.006 13:57:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.006 13:57:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.006 13:57:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.006 13:57:47 -- paths/export.sh@5 -- # export PATH 00:15:08.006 13:57:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.006 13:57:47 -- nvmf/common.sh@47 -- # : 0 00:15:08.006 13:57:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.006 13:57:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.006 13:57:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.006 13:57:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.006 13:57:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.006 13:57:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.006 13:57:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.006 13:57:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.006 13:57:47 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:08.006 13:57:47 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:08.006 13:57:47 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:08.006 13:57:47 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:08.006 13:57:47 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:08.006 13:57:47 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:08.006 13:57:47 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:08.006 13:57:47 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:08.006 13:57:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.006 13:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:08.006 13:57:47 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:08.006 13:57:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:08.006 13:57:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.006 13:57:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:08.006 13:57:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:08.006 13:57:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:08.006 13:57:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.006 13:57:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.006 13:57:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.006 13:57:47 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:08.006 13:57:47 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:08.006 13:57:47 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:08.006 13:57:47 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:15:08.006 13:57:47 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:08.006 13:57:47 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:08.006 13:57:47 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.006 13:57:47 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.006 13:57:47 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:08.006 13:57:47 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:08.006 13:57:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:08.006 13:57:47 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:08.006 13:57:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:08.006 13:57:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.006 13:57:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:08.006 13:57:47 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:08.007 13:57:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:08.007 13:57:47 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:08.007 13:57:47 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:08.007 Cannot find device "nvmf_init_br" 00:15:08.007 13:57:47 -- nvmf/common.sh@154 -- # true 00:15:08.007 13:57:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:08.007 Cannot find device "nvmf_tgt_br" 00:15:08.007 13:57:47 -- nvmf/common.sh@155 -- # true 00:15:08.007 13:57:47 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:08.007 Cannot find device "nvmf_tgt_br2" 00:15:08.007 13:57:47 -- nvmf/common.sh@156 -- # true 00:15:08.007 13:57:47 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:08.007 Cannot find device "nvmf_init_br" 00:15:08.007 13:57:47 -- nvmf/common.sh@157 -- # true 00:15:08.007 13:57:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:08.007 Cannot find device "nvmf_tgt_br" 00:15:08.007 13:57:47 -- nvmf/common.sh@158 -- # true 00:15:08.007 13:57:47 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:08.265 Cannot find device "nvmf_tgt_br2" 00:15:08.265 13:57:47 -- nvmf/common.sh@159 -- # true 00:15:08.265 13:57:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:08.265 Cannot find device "nvmf_br" 00:15:08.265 13:57:47 -- nvmf/common.sh@160 -- # true 00:15:08.265 13:57:47 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:08.265 Cannot find device "nvmf_init_if" 00:15:08.265 13:57:47 -- nvmf/common.sh@161 -- # true 00:15:08.265 13:57:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:08.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.265 13:57:47 -- nvmf/common.sh@162 -- # true 00:15:08.265 13:57:47 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:08.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:08.265 13:57:47 -- nvmf/common.sh@163 -- # true 00:15:08.265 13:57:47 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:08.265 13:57:47 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:08.265 13:57:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:08.265 13:57:47 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:08.265 13:57:47 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:08.265 13:57:47 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:08.265 13:57:47 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:08.265 13:57:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:08.265 13:57:47 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:08.265 13:57:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:08.265 13:57:47 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:08.265 13:57:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:08.265 13:57:47 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:08.265 13:57:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:08.265 13:57:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:08.265 13:57:47 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:08.265 13:57:47 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:08.265 13:57:47 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:08.266 13:57:47 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:08.525 13:57:47 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:08.525 13:57:47 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:08.525 13:57:47 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:08.525 13:57:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:08.525 13:57:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:08.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:15:08.525 00:15:08.525 --- 10.0.0.2 ping statistics --- 00:15:08.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.525 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:15:08.525 13:57:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:08.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:08.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:15:08.525 00:15:08.525 --- 10.0.0.3 ping statistics --- 00:15:08.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.525 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:15:08.525 13:57:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:08.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:15:08.525 00:15:08.525 --- 10.0.0.1 ping statistics --- 00:15:08.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.525 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:08.525 13:57:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.525 13:57:48 -- nvmf/common.sh@422 -- # return 0 00:15:08.525 13:57:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:08.525 13:57:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.525 13:57:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:08.525 13:57:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:08.525 13:57:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.525 13:57:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:08.525 13:57:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:08.525 13:57:48 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:08.525 13:57:48 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:08.525 13:57:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:08.525 13:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:08.525 13:57:48 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:08.525 13:57:48 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:08.525 13:57:48 -- target/nvmf_example.sh@34 -- # nvmfpid=66044 00:15:08.525 13:57:48 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:08.525 13:57:48 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:08.525 13:57:48 -- target/nvmf_example.sh@36 -- # waitforlisten 66044 00:15:08.525 13:57:48 -- common/autotest_common.sh@817 -- # '[' -z 66044 ']' 00:15:08.525 13:57:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.525 13:57:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:08.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.525 13:57:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
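Behind the "Cannot find device" / "Cannot open network namespace" messages above (expected on a clean host, since cleanup runs before setup), nvmf_veth_init has just built the test topology: a network namespace holding the target-side veth ends, an nvmf_br bridge joining both sides, addresses 10.0.0.1 through 10.0.0.3, and an iptables rule admitting the NVMe/TCP port; the example target application was then launched inside that namespace (pid 66044) and the suite is waiting for its RPC socket. Condensed from the commands in the trace, and omitting the second target interface pair and error handling, the topology amounts to:

# Condensed sketch of the setup traced above (second target interface omitted).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # bridge the two sides together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator-to-target reachability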
00:15:08.525 13:57:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:08.525 13:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:09.462 13:57:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:09.462 13:57:48 -- common/autotest_common.sh@850 -- # return 0 00:15:09.462 13:57:48 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:09.462 13:57:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:09.462 13:57:48 -- common/autotest_common.sh@10 -- # set +x 00:15:09.462 13:57:49 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:09.462 13:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.462 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:09.462 13:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.462 13:57:49 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:09.462 13:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.462 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:09.721 13:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.721 13:57:49 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:09.721 13:57:49 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.721 13:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.721 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:09.721 13:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.721 13:57:49 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:09.721 13:57:49 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.721 13:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.721 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:09.721 13:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.721 13:57:49 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:09.721 13:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.721 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:09.721 13:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.721 13:57:49 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:09.721 13:57:49 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:21.957 Initializing NVMe Controllers 00:15:21.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:21.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:21.957 Initialization complete. Launching workers. 
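With the target listening on its RPC socket, the rpc_cmd calls traced above stand the example subsystem up: a TCP transport with an 8192-byte I/O unit size, a 64 MiB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420, after which spdk_nvme_perf is pointed at it. As a sketch, the same sequence issued through scripts/rpc.py would look like the following; the rpc.py invocation and default socket path are assumptions, since the suite drives these through its rpc_cmd wrapper:

# Assumed rpc.py form of the RPCs traced above (method names and arguments from the log).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                      # returns bdev name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420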
00:15:21.957 ======================================================== 00:15:21.957 Latency(us) 00:15:21.957 Device Information : IOPS MiB/s Average min max 00:15:21.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15161.88 59.23 4220.82 771.14 23172.02 00:15:21.957 ======================================================== 00:15:21.957 Total : 15161.88 59.23 4220.82 771.14 23172.02 00:15:21.957 00:15:21.957 13:57:59 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:21.957 13:57:59 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:21.957 13:57:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:21.957 13:57:59 -- nvmf/common.sh@117 -- # sync 00:15:21.957 13:57:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.957 13:57:59 -- nvmf/common.sh@120 -- # set +e 00:15:21.957 13:57:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.957 13:57:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.957 rmmod nvme_tcp 00:15:21.957 rmmod nvme_fabrics 00:15:21.957 rmmod nvme_keyring 00:15:21.957 13:57:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.957 13:57:59 -- nvmf/common.sh@124 -- # set -e 00:15:21.957 13:57:59 -- nvmf/common.sh@125 -- # return 0 00:15:21.957 13:57:59 -- nvmf/common.sh@478 -- # '[' -n 66044 ']' 00:15:21.957 13:57:59 -- nvmf/common.sh@479 -- # killprocess 66044 00:15:21.957 13:57:59 -- common/autotest_common.sh@936 -- # '[' -z 66044 ']' 00:15:21.957 13:57:59 -- common/autotest_common.sh@940 -- # kill -0 66044 00:15:21.957 13:57:59 -- common/autotest_common.sh@941 -- # uname 00:15:21.957 13:57:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:21.957 13:57:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66044 00:15:21.957 13:57:59 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:15:21.957 killing process with pid 66044 00:15:21.957 13:57:59 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:15:21.957 13:57:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66044' 00:15:21.957 13:57:59 -- common/autotest_common.sh@955 -- # kill 66044 00:15:21.958 13:57:59 -- common/autotest_common.sh@960 -- # wait 66044 00:15:21.958 nvmf threads initialize successfully 00:15:21.958 bdev subsystem init successfully 00:15:21.958 created a nvmf target service 00:15:21.958 create targets's poll groups done 00:15:21.958 all subsystems of target started 00:15:21.958 nvmf target is running 00:15:21.958 all subsystems of target stopped 00:15:21.958 destroy targets's poll groups done 00:15:21.958 destroyed the nvmf target service 00:15:21.958 bdev subsystem finish successfully 00:15:21.958 nvmf threads destroy successfully 00:15:21.958 13:58:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:21.958 13:58:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:21.958 13:58:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:21.958 13:58:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.958 13:58:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.958 13:58:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.958 13:58:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.958 13:58:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.958 13:58:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:21.958 13:58:01 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:21.958 13:58:01 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:15:21.958 13:58:01 -- common/autotest_common.sh@10 -- # set +x 00:15:21.958 00:15:21.958 real 0m13.716s 00:15:21.958 user 0m47.758s 00:15:21.958 sys 0m2.548s 00:15:21.958 13:58:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:21.958 13:58:01 -- common/autotest_common.sh@10 -- # set +x 00:15:21.958 ************************************ 00:15:21.958 END TEST nvmf_example 00:15:21.958 ************************************ 00:15:21.958 13:58:01 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:21.958 13:58:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:21.958 13:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:21.958 13:58:01 -- common/autotest_common.sh@10 -- # set +x 00:15:21.958 ************************************ 00:15:21.958 START TEST nvmf_filesystem 00:15:21.958 ************************************ 00:15:21.958 13:58:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:21.958 * Looking for test storage... 00:15:21.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.958 13:58:01 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:21.958 13:58:01 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:21.958 13:58:01 -- common/autotest_common.sh@34 -- # set -e 00:15:21.958 13:58:01 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:21.958 13:58:01 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:21.958 13:58:01 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:21.958 13:58:01 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:21.958 13:58:01 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:21.958 13:58:01 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:21.958 13:58:01 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:21.958 13:58:01 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:21.958 13:58:01 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:15:21.958 13:58:01 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:21.958 13:58:01 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:21.958 13:58:01 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:21.958 13:58:01 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:21.958 13:58:01 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:21.958 13:58:01 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:21.958 13:58:01 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:21.958 13:58:01 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:21.958 13:58:01 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:21.958 13:58:01 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:21.958 13:58:01 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:15:21.958 13:58:01 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:21.958 13:58:01 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:21.958 13:58:01 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:15:21.958 13:58:01 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:15:21.958 13:58:01 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:15:21.958 13:58:01 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:21.958 13:58:01 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:15:21.958 13:58:01 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:15:21.958 13:58:01 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:21.958 13:58:01 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:21.958 13:58:01 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:15:21.958 13:58:01 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:15:21.958 13:58:01 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:15:21.958 13:58:01 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:15:21.958 13:58:01 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:15:21.958 13:58:01 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:15:21.958 13:58:01 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:15:21.958 13:58:01 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:21.958 13:58:01 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:15:21.958 13:58:01 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:15:21.958 13:58:01 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:15:21.958 13:58:01 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:15:21.958 13:58:01 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:15:21.958 13:58:01 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:21.958 13:58:01 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:15:21.958 13:58:01 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:15:21.958 13:58:01 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:15:21.958 13:58:01 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:21.958 13:58:01 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:15:21.958 13:58:01 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:15:21.958 13:58:01 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:15:21.958 13:58:01 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:15:21.958 13:58:01 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:15:21.958 13:58:01 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:15:21.958 13:58:01 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:15:21.958 13:58:01 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:15:21.958 13:58:01 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:15:21.958 13:58:01 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:15:21.958 13:58:01 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:15:21.958 13:58:01 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:15:21.958 13:58:01 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:15:21.958 13:58:01 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:15:21.958 13:58:01 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:15:21.958 13:58:01 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:15:21.958 13:58:01 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:15:21.958 
13:58:01 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:21.958 13:58:01 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:15:21.958 13:58:01 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:15:21.958 13:58:01 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:15:21.958 13:58:01 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:15:21.958 13:58:01 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:15:21.958 13:58:01 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:15:21.958 13:58:01 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:15:21.958 13:58:01 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:15:21.958 13:58:01 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:15:21.958 13:58:01 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:15:21.958 13:58:01 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:15:21.958 13:58:01 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:21.958 13:58:01 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:15:21.958 13:58:01 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:15:21.958 13:58:01 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:21.958 13:58:01 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:21.958 13:58:01 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:21.958 13:58:01 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:21.958 13:58:01 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:21.958 13:58:01 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:21.958 13:58:01 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:21.958 13:58:01 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:21.958 13:58:01 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:21.958 13:58:01 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:21.958 13:58:01 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:21.958 13:58:01 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:21.958 13:58:01 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:21.958 13:58:01 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:21.958 13:58:01 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:21.958 13:58:01 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:21.958 #define SPDK_CONFIG_H 00:15:21.958 #define SPDK_CONFIG_APPS 1 00:15:21.958 #define SPDK_CONFIG_ARCH native 00:15:21.958 #define SPDK_CONFIG_ASAN 1 00:15:21.959 #define SPDK_CONFIG_AVAHI 1 00:15:21.959 #undef SPDK_CONFIG_CET 00:15:21.959 #define SPDK_CONFIG_COVERAGE 1 00:15:21.959 #define SPDK_CONFIG_CROSS_PREFIX 00:15:21.959 #undef SPDK_CONFIG_CRYPTO 00:15:21.959 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:21.959 #undef SPDK_CONFIG_CUSTOMOCF 00:15:21.959 #undef SPDK_CONFIG_DAOS 00:15:21.959 #define SPDK_CONFIG_DAOS_DIR 00:15:21.959 #define SPDK_CONFIG_DEBUG 1 00:15:21.959 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:21.959 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:15:21.959 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:21.959 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:21.959 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:21.959 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:21.959 #define SPDK_CONFIG_EXAMPLES 1 00:15:21.959 #undef SPDK_CONFIG_FC 00:15:21.959 #define SPDK_CONFIG_FC_PATH 00:15:21.959 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:21.959 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:21.959 #undef SPDK_CONFIG_FUSE 00:15:21.959 #undef SPDK_CONFIG_FUZZER 00:15:21.959 #define SPDK_CONFIG_FUZZER_LIB 00:15:21.959 #define SPDK_CONFIG_GOLANG 1 00:15:21.959 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:21.959 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:21.959 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:21.959 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:15:21.959 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:21.959 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:21.959 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:21.959 #define SPDK_CONFIG_IDXD 1 00:15:21.959 #undef SPDK_CONFIG_IDXD_KERNEL 00:15:21.959 #undef SPDK_CONFIG_IPSEC_MB 00:15:21.959 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:21.959 #define SPDK_CONFIG_ISAL 1 00:15:21.959 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:21.959 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:21.959 #define SPDK_CONFIG_LIBDIR 00:15:21.959 #undef SPDK_CONFIG_LTO 00:15:21.959 #define SPDK_CONFIG_MAX_LCORES 00:15:21.959 #define SPDK_CONFIG_NVME_CUSE 1 00:15:21.959 #undef SPDK_CONFIG_OCF 00:15:21.959 #define SPDK_CONFIG_OCF_PATH 00:15:21.959 #define SPDK_CONFIG_OPENSSL_PATH 00:15:21.959 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:21.959 #define SPDK_CONFIG_PGO_DIR 00:15:21.959 #undef SPDK_CONFIG_PGO_USE 00:15:21.959 #define SPDK_CONFIG_PREFIX /usr/local 00:15:21.959 #undef SPDK_CONFIG_RAID5F 00:15:21.959 #undef SPDK_CONFIG_RBD 00:15:21.959 #define SPDK_CONFIG_RDMA 1 00:15:21.959 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:21.959 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:21.959 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:21.959 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:21.959 #define SPDK_CONFIG_SHARED 1 00:15:21.959 #undef SPDK_CONFIG_SMA 00:15:21.959 #define SPDK_CONFIG_TESTS 1 00:15:21.959 #undef SPDK_CONFIG_TSAN 00:15:21.959 #define SPDK_CONFIG_UBLK 1 00:15:21.959 #define SPDK_CONFIG_UBSAN 1 00:15:21.959 #undef SPDK_CONFIG_UNIT_TESTS 00:15:21.959 #undef SPDK_CONFIG_URING 00:15:21.959 #define SPDK_CONFIG_URING_PATH 00:15:21.959 #undef SPDK_CONFIG_URING_ZNS 00:15:21.959 #define SPDK_CONFIG_USDT 1 00:15:21.959 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:21.959 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:21.959 #undef SPDK_CONFIG_VFIO_USER 00:15:21.959 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:21.959 #define SPDK_CONFIG_VHOST 1 00:15:21.959 #define SPDK_CONFIG_VIRTIO 1 00:15:21.959 #undef SPDK_CONFIG_VTUNE 00:15:21.959 #define SPDK_CONFIG_VTUNE_DIR 00:15:21.959 #define SPDK_CONFIG_WERROR 1 00:15:21.959 #define SPDK_CONFIG_WPDK_DIR 00:15:21.959 #undef SPDK_CONFIG_XNVME 00:15:21.959 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:21.959 13:58:01 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:21.959 13:58:01 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.959 13:58:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.959 13:58:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.959 13:58:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.959 13:58:01 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.959 13:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.959 13:58:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.959 13:58:01 -- paths/export.sh@5 -- # export PATH 00:15:21.959 13:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.959 13:58:01 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:21.959 13:58:01 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:21.959 13:58:01 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:21.959 13:58:01 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:21.959 13:58:01 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:21.959 13:58:01 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:21.959 13:58:01 -- pm/common@67 -- # TEST_TAG=N/A 00:15:21.959 13:58:01 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:21.959 13:58:01 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:21.959 13:58:01 -- pm/common@71 -- # uname -s 00:15:21.959 13:58:01 -- pm/common@71 -- # PM_OS=Linux 00:15:21.959 13:58:01 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:21.959 13:58:01 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:15:21.959 13:58:01 -- pm/common@76 -- # [[ Linux == Linux ]] 00:15:21.959 13:58:01 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:15:21.959 13:58:01 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:15:21.959 13:58:01 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:15:21.959 13:58:01 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:15:21.959 13:58:01 -- common/autotest_common.sh@57 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:15:21.959 13:58:01 -- common/autotest_common.sh@61 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:21.959 13:58:01 -- common/autotest_common.sh@63 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:15:21.959 13:58:01 -- common/autotest_common.sh@65 -- # : 1 00:15:21.959 13:58:01 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:21.959 13:58:01 -- common/autotest_common.sh@67 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:15:21.959 13:58:01 -- common/autotest_common.sh@69 -- # : 00:15:21.959 13:58:01 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:15:21.959 13:58:01 -- common/autotest_common.sh@71 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:15:21.959 13:58:01 -- common/autotest_common.sh@73 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:15:21.959 13:58:01 -- common/autotest_common.sh@75 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:15:21.959 13:58:01 -- common/autotest_common.sh@77 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:21.959 13:58:01 -- common/autotest_common.sh@79 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:15:21.959 13:58:01 -- common/autotest_common.sh@81 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:15:21.959 13:58:01 -- common/autotest_common.sh@83 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:15:21.959 13:58:01 -- common/autotest_common.sh@85 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:15:21.959 13:58:01 -- common/autotest_common.sh@87 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:15:21.959 13:58:01 -- common/autotest_common.sh@89 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:15:21.959 13:58:01 -- common/autotest_common.sh@91 -- # : 1 00:15:21.959 13:58:01 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:15:21.959 13:58:01 -- common/autotest_common.sh@93 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:15:21.959 13:58:01 -- common/autotest_common.sh@95 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:21.959 13:58:01 -- common/autotest_common.sh@97 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:15:21.959 13:58:01 -- common/autotest_common.sh@99 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:15:21.959 13:58:01 -- common/autotest_common.sh@101 -- # : tcp 00:15:21.959 13:58:01 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:21.959 13:58:01 
-- common/autotest_common.sh@103 -- # : 0 00:15:21.959 13:58:01 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:15:21.959 13:58:01 -- common/autotest_common.sh@105 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:15:21.960 13:58:01 -- common/autotest_common.sh@107 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:15:21.960 13:58:01 -- common/autotest_common.sh@109 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:15:21.960 13:58:01 -- common/autotest_common.sh@111 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:15:21.960 13:58:01 -- common/autotest_common.sh@113 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:15:21.960 13:58:01 -- common/autotest_common.sh@115 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:15:21.960 13:58:01 -- common/autotest_common.sh@117 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:21.960 13:58:01 -- common/autotest_common.sh@119 -- # : 1 00:15:21.960 13:58:01 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:15:21.960 13:58:01 -- common/autotest_common.sh@121 -- # : 1 00:15:21.960 13:58:01 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:15:21.960 13:58:01 -- common/autotest_common.sh@123 -- # : 00:15:21.960 13:58:01 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:21.960 13:58:01 -- common/autotest_common.sh@125 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:15:21.960 13:58:01 -- common/autotest_common.sh@127 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:15:21.960 13:58:01 -- common/autotest_common.sh@129 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:15:21.960 13:58:01 -- common/autotest_common.sh@131 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:15:21.960 13:58:01 -- common/autotest_common.sh@133 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:15:21.960 13:58:01 -- common/autotest_common.sh@135 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:15:21.960 13:58:01 -- common/autotest_common.sh@137 -- # : 00:15:21.960 13:58:01 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:15:21.960 13:58:01 -- common/autotest_common.sh@139 -- # : true 00:15:21.960 13:58:01 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:15:21.960 13:58:01 -- common/autotest_common.sh@141 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:15:21.960 13:58:01 -- common/autotest_common.sh@143 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:15:21.960 13:58:01 -- common/autotest_common.sh@145 -- # : 1 00:15:21.960 13:58:01 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:15:21.960 13:58:01 -- common/autotest_common.sh@147 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:15:21.960 13:58:01 -- common/autotest_common.sh@149 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:15:21.960 
13:58:01 -- common/autotest_common.sh@151 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:15:21.960 13:58:01 -- common/autotest_common.sh@153 -- # : 00:15:21.960 13:58:01 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:15:21.960 13:58:01 -- common/autotest_common.sh@155 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:15:21.960 13:58:01 -- common/autotest_common.sh@157 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:15:21.960 13:58:01 -- common/autotest_common.sh@159 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:15:21.960 13:58:01 -- common/autotest_common.sh@161 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:15:21.960 13:58:01 -- common/autotest_common.sh@163 -- # : 0 00:15:21.960 13:58:01 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:15:21.960 13:58:01 -- common/autotest_common.sh@166 -- # : 00:15:21.960 13:58:01 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:15:21.960 13:58:01 -- common/autotest_common.sh@168 -- # : 1 00:15:21.960 13:58:01 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:15:21.960 13:58:01 -- common/autotest_common.sh@170 -- # : 1 00:15:21.960 13:58:01 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:21.960 13:58:01 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:21.960 13:58:01 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
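The long run of ": <value>" / "export SPDK_TEST_*" pairs in the trace above is autotest_common.sh applying defaults: each flag keeps whatever autorun-spdk.conf already set for this job (here SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp, SPDK_RUN_ASAN=1, SPDK_RUN_UBSAN=1) and otherwise falls back to a default before being exported. A paraphrase of the idiom, not the verbatim script source, using two flags from this run:

# Paraphrase of the default-then-export pattern traced above (not the verbatim script).
: "${SPDK_TEST_NVMF:=0}"     # already set to 1 by autorun-spdk.conf, so the default is skipped
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME:=0}"     # not requested for this job, so it stays at the default 0
export SPDK_TEST_NVME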
00:15:21.960 13:58:01 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:21.960 13:58:01 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:21.960 13:58:01 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:21.960 13:58:01 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:21.960 13:58:01 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:21.960 13:58:01 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:15:21.960 13:58:01 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:21.960 13:58:01 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:21.960 13:58:01 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:21.960 13:58:01 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:21.960 13:58:01 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:21.960 13:58:01 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:15:21.960 13:58:01 -- common/autotest_common.sh@199 -- # cat 00:15:21.960 13:58:01 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:15:21.960 13:58:01 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:21.960 13:58:01 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:21.960 13:58:01 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:21.960 13:58:01 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:21.960 13:58:01 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:15:21.960 13:58:01 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:15:21.960 13:58:01 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:21.960 13:58:01 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:21.960 13:58:01 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:21.960 13:58:01 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:21.960 13:58:01 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:21.960 13:58:01 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:21.960 13:58:01 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:21.960 13:58:01 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:21.960 13:58:01 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:21.960 13:58:01 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:21.960 13:58:01 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:21.960 13:58:01 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:21.960 13:58:01 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:15:21.960 13:58:01 -- common/autotest_common.sh@252 -- # export valgrind= 00:15:21.960 13:58:01 -- common/autotest_common.sh@252 -- # valgrind= 00:15:21.960 13:58:01 -- common/autotest_common.sh@258 -- # uname -s 00:15:21.960 13:58:01 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:15:21.960 13:58:01 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:15:21.960 13:58:01 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:15:21.960 13:58:01 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:15:21.960 13:58:01 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:15:21.960 13:58:01 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:15:21.960 13:58:01 -- common/autotest_common.sh@268 -- # MAKE=make 00:15:21.960 13:58:01 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:15:21.960 13:58:01 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:15:21.960 13:58:01 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:15:21.960 13:58:01 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:15:21.960 13:58:01 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:15:21.960 13:58:01 -- common/autotest_common.sh@289 -- # for i in "$@" 00:15:21.961 13:58:01 -- common/autotest_common.sh@290 -- # case "$i" in 00:15:21.961 13:58:01 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:15:21.961 13:58:01 -- common/autotest_common.sh@307 -- # [[ -z 66309 ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@307 -- # kill -0 66309 00:15:21.961 13:58:01 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:15:21.961 13:58:01 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:15:21.961 13:58:01 -- common/autotest_common.sh@320 -- # local mount target_dir 00:15:21.961 13:58:01 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:15:21.961 13:58:01 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:15:21.961 13:58:01 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:15:21.961 13:58:01 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:15:21.961 13:58:01 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.2i8N8o 00:15:21.961 13:58:01 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:21.961 13:58:01 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.2i8N8o/tests/target /tmp/spdk.2i8N8o 00:15:21.961 13:58:01 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@316 -- # df -T 00:15:21.961 13:58:01 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=6265274368 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267887616 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=13775933440 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=5249249280 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=13775933440 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=5249249280 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:15:21.961 13:58:01 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267748352 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267887616 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=139264 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:15:21.961 13:58:01 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # avails["$mount"]=92356468736 00:15:21.961 13:58:01 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:15:21.961 13:58:01 -- common/autotest_common.sh@352 -- # uses["$mount"]=7346311168 00:15:21.961 13:58:01 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:15:21.961 13:58:01 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:15:21.961 * Looking for test storage... 
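Editorial note: the df/read loop recorded above is autotest_common.sh's set_test_storage filling bash associative arrays (mounts/fss/sizes/avails) before it picks a mount with enough free space for the test. A condensed sketch of the selection step that the trace walks through next (simplified; the btrfs/tmpfs special cases and fallback-directory creation visible in the trace are omitted here):

  # sketch only -- condensed from the set_test_storage trace above
  requested_size=2214592512                     # 2 GiB + 64 MiB slack, as requested in the trace
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  for target_dir in "${storage_candidates[@]}"; do
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails[$mount]}            # avails[] was populated from 'df -T' above
      if (( target_space >= requested_size )); then
          export SPDK_TEST_STORAGE=$target_dir  # here: /home/vagrant/spdk_repo/spdk/test/nvmf/target
          break
      fi
  done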
00:15:21.961 13:58:01 -- common/autotest_common.sh@357 -- # local target_space new_size 00:15:21.961 13:58:01 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:15:21.961 13:58:01 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.961 13:58:01 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:21.961 13:58:01 -- common/autotest_common.sh@361 -- # mount=/home 00:15:21.961 13:58:01 -- common/autotest_common.sh@363 -- # target_space=13775933440 00:15:21.961 13:58:01 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:15:21.961 13:58:01 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:15:21.961 13:58:01 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.961 13:58:01 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.961 13:58:01 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:21.961 13:58:01 -- common/autotest_common.sh@378 -- # return 0 00:15:21.961 13:58:01 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:15:21.961 13:58:01 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:15:21.961 13:58:01 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:21.961 13:58:01 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:21.961 13:58:01 -- common/autotest_common.sh@1673 -- # true 00:15:21.961 13:58:01 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:15:21.961 13:58:01 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:15:21.961 13:58:01 -- common/autotest_common.sh@27 -- # exec 00:15:21.961 13:58:01 -- common/autotest_common.sh@29 -- # exec 00:15:21.961 13:58:01 -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:21.961 13:58:01 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:21.961 13:58:01 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:21.961 13:58:01 -- common/autotest_common.sh@18 -- # set -x 00:15:21.961 13:58:01 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:21.961 13:58:01 -- nvmf/common.sh@7 -- # uname -s 00:15:21.961 13:58:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.961 13:58:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.961 13:58:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.961 13:58:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.961 13:58:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.961 13:58:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.961 13:58:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.961 13:58:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.961 13:58:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.961 13:58:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.961 13:58:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:21.961 13:58:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:21.961 13:58:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.961 13:58:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.961 13:58:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:21.961 13:58:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.961 13:58:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:21.961 13:58:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.961 13:58:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.961 13:58:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.962 13:58:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.962 13:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.962 13:58:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.962 13:58:01 -- paths/export.sh@5 -- # export PATH 00:15:21.962 13:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.962 13:58:01 -- nvmf/common.sh@47 -- # : 0 00:15:21.962 13:58:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.962 13:58:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.962 13:58:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.962 13:58:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.962 13:58:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.962 13:58:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.962 13:58:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.962 13:58:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.962 13:58:01 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:21.962 13:58:01 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:21.962 13:58:01 -- target/filesystem.sh@15 -- # nvmftestinit 00:15:21.962 13:58:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:21.962 13:58:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.962 13:58:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:21.962 13:58:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:21.962 13:58:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:21.962 13:58:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.962 13:58:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.962 13:58:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.220 13:58:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:22.220 13:58:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:22.220 13:58:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:22.220 13:58:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:22.220 13:58:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:22.220 13:58:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:22.220 13:58:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.220 13:58:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.220 13:58:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:22.220 13:58:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:22.220 13:58:01 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:22.220 13:58:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:22.220 13:58:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:22.220 13:58:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.220 13:58:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:22.220 13:58:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:22.220 13:58:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:22.220 13:58:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:22.220 13:58:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:22.220 13:58:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:22.220 Cannot find device "nvmf_tgt_br" 00:15:22.220 13:58:01 -- nvmf/common.sh@155 -- # true 00:15:22.220 13:58:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:22.220 Cannot find device "nvmf_tgt_br2" 00:15:22.220 13:58:01 -- nvmf/common.sh@156 -- # true 00:15:22.220 13:58:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:22.220 13:58:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:22.220 Cannot find device "nvmf_tgt_br" 00:15:22.220 13:58:01 -- nvmf/common.sh@158 -- # true 00:15:22.220 13:58:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:22.220 Cannot find device "nvmf_tgt_br2" 00:15:22.220 13:58:01 -- nvmf/common.sh@159 -- # true 00:15:22.220 13:58:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:22.220 13:58:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:22.220 13:58:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:22.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.220 13:58:01 -- nvmf/common.sh@162 -- # true 00:15:22.220 13:58:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:22.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.220 13:58:01 -- nvmf/common.sh@163 -- # true 00:15:22.221 13:58:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:22.221 13:58:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:22.221 13:58:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:22.221 13:58:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:22.221 13:58:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:22.479 13:58:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:22.479 13:58:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:22.479 13:58:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:22.479 13:58:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:22.479 13:58:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:22.479 13:58:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:22.479 13:58:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:22.479 13:58:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:22.479 13:58:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:22.479 13:58:01 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:22.479 13:58:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:22.479 13:58:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:22.479 13:58:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:22.479 13:58:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:22.479 13:58:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:22.479 13:58:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:22.479 13:58:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:22.479 13:58:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:22.479 13:58:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:22.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:15:22.479 00:15:22.479 --- 10.0.0.2 ping statistics --- 00:15:22.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.479 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:15:22.479 13:58:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:22.479 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:22.479 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:15:22.479 00:15:22.479 --- 10.0.0.3 ping statistics --- 00:15:22.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.479 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:22.479 13:58:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:22.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:22.479 00:15:22.479 --- 10.0.0.1 ping statistics --- 00:15:22.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.479 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:22.479 13:58:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.479 13:58:02 -- nvmf/common.sh@422 -- # return 0 00:15:22.479 13:58:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:22.479 13:58:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.479 13:58:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:22.479 13:58:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:22.479 13:58:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.479 13:58:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:22.479 13:58:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:22.479 13:58:02 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:22.479 13:58:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:22.479 13:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:22.479 13:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:22.738 ************************************ 00:15:22.738 START TEST nvmf_filesystem_no_in_capsule 00:15:22.738 ************************************ 00:15:22.738 13:58:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:15:22.738 13:58:02 -- target/filesystem.sh@47 -- # in_capsule=0 00:15:22.738 13:58:02 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:22.738 13:58:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:22.738 13:58:02 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:15:22.738 13:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:22.738 13:58:02 -- nvmf/common.sh@470 -- # nvmfpid=66479 00:15:22.738 13:58:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:22.738 13:58:02 -- nvmf/common.sh@471 -- # waitforlisten 66479 00:15:22.738 13:58:02 -- common/autotest_common.sh@817 -- # '[' -z 66479 ']' 00:15:22.738 13:58:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.738 13:58:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:22.738 13:58:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.738 13:58:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:22.738 13:58:02 -- common/autotest_common.sh@10 -- # set +x 00:15:22.738 [2024-04-26 13:58:02.326374] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:22.738 [2024-04-26 13:58:02.326508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.027 [2024-04-26 13:58:02.506858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.286 [2024-04-26 13:58:02.771565] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.286 [2024-04-26 13:58:02.771623] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.286 [2024-04-26 13:58:02.771640] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.286 [2024-04-26 13:58:02.771652] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.286 [2024-04-26 13:58:02.771665] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
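Editorial note: for reference, the nvmf_veth_init sequence traced above (nvmf/common.sh@141-207) builds the test-bed topology before nvmfappstart launches nvmf_tgt inside the namespace. This is a condensed restatement of the ip/iptables commands already shown in the trace, not additional setup (the per-interface 'ip link set ... up' calls are elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side,    10.0.0.2/24
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side,    10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                              # bridge the three *_br peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2, 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm this topology is reachable before the target is started with 'ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt'.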
00:15:23.286 [2024-04-26 13:58:02.771836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.286 [2024-04-26 13:58:02.772126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.286 [2024-04-26 13:58:02.772732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.286 [2024-04-26 13:58:02.772776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.853 13:58:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:23.853 13:58:03 -- common/autotest_common.sh@850 -- # return 0 00:15:23.853 13:58:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:23.853 13:58:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:23.853 13:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:23.853 13:58:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.853 13:58:03 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:23.853 13:58:03 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:23.853 13:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.853 13:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:23.853 [2024-04-26 13:58:03.299243] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.853 13:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:23.853 13:58:03 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:23.853 13:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:23.853 13:58:03 -- common/autotest_common.sh@10 -- # set +x 00:15:24.420 Malloc1 00:15:24.420 13:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.420 13:58:04 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:24.420 13:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.420 13:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:24.420 13:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.420 13:58:04 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:24.420 13:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.420 13:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:24.420 13:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.420 13:58:04 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:24.420 13:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.420 13:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:24.420 [2024-04-26 13:58:04.059777] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.420 13:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.420 13:58:04 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:24.420 13:58:04 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:15:24.420 13:58:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:15:24.420 13:58:04 -- common/autotest_common.sh@1366 -- # local bs 00:15:24.421 13:58:04 -- common/autotest_common.sh@1367 -- # local nb 00:15:24.421 13:58:04 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:24.421 13:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:24.421 13:58:04 -- common/autotest_common.sh@10 -- # set +x 00:15:24.679 
13:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:24.679 13:58:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:15:24.679 { 00:15:24.679 "aliases": [ 00:15:24.679 "451e5727-d09f-4e9c-b8c7-0250632450b7" 00:15:24.679 ], 00:15:24.679 "assigned_rate_limits": { 00:15:24.679 "r_mbytes_per_sec": 0, 00:15:24.679 "rw_ios_per_sec": 0, 00:15:24.679 "rw_mbytes_per_sec": 0, 00:15:24.679 "w_mbytes_per_sec": 0 00:15:24.679 }, 00:15:24.679 "block_size": 512, 00:15:24.679 "claim_type": "exclusive_write", 00:15:24.679 "claimed": true, 00:15:24.679 "driver_specific": {}, 00:15:24.679 "memory_domains": [ 00:15:24.679 { 00:15:24.679 "dma_device_id": "system", 00:15:24.679 "dma_device_type": 1 00:15:24.679 }, 00:15:24.679 { 00:15:24.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.679 "dma_device_type": 2 00:15:24.679 } 00:15:24.679 ], 00:15:24.679 "name": "Malloc1", 00:15:24.679 "num_blocks": 1048576, 00:15:24.679 "product_name": "Malloc disk", 00:15:24.679 "supported_io_types": { 00:15:24.679 "abort": true, 00:15:24.679 "compare": false, 00:15:24.679 "compare_and_write": false, 00:15:24.679 "flush": true, 00:15:24.679 "nvme_admin": false, 00:15:24.679 "nvme_io": false, 00:15:24.679 "read": true, 00:15:24.679 "reset": true, 00:15:24.679 "unmap": true, 00:15:24.679 "write": true, 00:15:24.679 "write_zeroes": true 00:15:24.679 }, 00:15:24.679 "uuid": "451e5727-d09f-4e9c-b8c7-0250632450b7", 00:15:24.679 "zoned": false 00:15:24.679 } 00:15:24.679 ]' 00:15:24.679 13:58:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:15:24.679 13:58:04 -- common/autotest_common.sh@1369 -- # bs=512 00:15:24.679 13:58:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:15:24.679 13:58:04 -- common/autotest_common.sh@1370 -- # nb=1048576 00:15:24.679 13:58:04 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:15:24.679 13:58:04 -- common/autotest_common.sh@1374 -- # echo 512 00:15:24.679 13:58:04 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:24.679 13:58:04 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:24.937 13:58:04 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:24.937 13:58:04 -- common/autotest_common.sh@1184 -- # local i=0 00:15:24.937 13:58:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:24.937 13:58:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:24.937 13:58:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:26.844 13:58:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:26.844 13:58:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:26.844 13:58:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:26.844 13:58:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:26.844 13:58:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:26.844 13:58:06 -- common/autotest_common.sh@1194 -- # return 0 00:15:26.844 13:58:06 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:26.844 13:58:06 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:26.844 13:58:06 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:26.844 13:58:06 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:26.844 13:58:06 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:15:26.844 13:58:06 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:26.844 13:58:06 -- setup/common.sh@80 -- # echo 536870912 00:15:26.844 13:58:06 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:26.844 13:58:06 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:26.844 13:58:06 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:26.844 13:58:06 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:27.103 13:58:06 -- target/filesystem.sh@69 -- # partprobe 00:15:27.103 13:58:06 -- target/filesystem.sh@70 -- # sleep 1 00:15:28.039 13:58:07 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:15:28.039 13:58:07 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:28.039 13:58:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:28.039 13:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.039 13:58:07 -- common/autotest_common.sh@10 -- # set +x 00:15:28.039 ************************************ 00:15:28.039 START TEST filesystem_ext4 00:15:28.039 ************************************ 00:15:28.039 13:58:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:28.039 13:58:07 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:28.039 13:58:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:28.039 13:58:07 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:28.039 13:58:07 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:15:28.039 13:58:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:28.039 13:58:07 -- common/autotest_common.sh@914 -- # local i=0 00:15:28.039 13:58:07 -- common/autotest_common.sh@915 -- # local force 00:15:28.039 13:58:07 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:15:28.039 13:58:07 -- common/autotest_common.sh@918 -- # force=-F 00:15:28.039 13:58:07 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:28.039 mke2fs 1.46.5 (30-Dec-2021) 00:15:28.298 Discarding device blocks: 0/522240 done 00:15:28.298 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:28.298 Filesystem UUID: 37fee0e1-b346-44ec-a9ed-b403235aa7ae 00:15:28.298 Superblock backups stored on blocks: 00:15:28.298 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:28.298 00:15:28.298 Allocating group tables: 0/64 done 00:15:28.298 Writing inode tables: 0/64 done 00:15:28.298 Creating journal (8192 blocks): done 00:15:28.298 Writing superblocks and filesystem accounting information: 0/64 done 00:15:28.298 00:15:28.298 13:58:07 -- common/autotest_common.sh@931 -- # return 0 00:15:28.298 13:58:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:28.558 13:58:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:28.558 13:58:08 -- target/filesystem.sh@25 -- # sync 00:15:28.558 13:58:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:28.558 13:58:08 -- target/filesystem.sh@27 -- # sync 00:15:28.558 13:58:08 -- target/filesystem.sh@29 -- # i=0 00:15:28.558 13:58:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:28.558 13:58:08 -- target/filesystem.sh@37 -- # kill -0 66479 00:15:28.558 13:58:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:28.558 13:58:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:28.558 13:58:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:28.558 13:58:08 -- target/filesystem.sh@43 -- # grep -q -w 
nvme0n1p1 00:15:28.558 ************************************ 00:15:28.558 END TEST filesystem_ext4 00:15:28.558 ************************************ 00:15:28.558 00:15:28.558 real 0m0.457s 00:15:28.558 user 0m0.041s 00:15:28.558 sys 0m0.073s 00:15:28.558 13:58:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:28.558 13:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:28.558 13:58:08 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:28.558 13:58:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:28.558 13:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:28.558 13:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:28.822 ************************************ 00:15:28.822 START TEST filesystem_btrfs 00:15:28.822 ************************************ 00:15:28.822 13:58:08 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:28.822 13:58:08 -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:28.822 13:58:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:28.822 13:58:08 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:28.822 13:58:08 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:15:28.822 13:58:08 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:28.822 13:58:08 -- common/autotest_common.sh@914 -- # local i=0 00:15:28.822 13:58:08 -- common/autotest_common.sh@915 -- # local force 00:15:28.822 13:58:08 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:15:28.822 13:58:08 -- common/autotest_common.sh@920 -- # force=-f 00:15:28.822 13:58:08 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:28.822 btrfs-progs v6.6.2 00:15:28.822 See https://btrfs.readthedocs.io for more information. 00:15:28.822 00:15:28.822 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:28.822 NOTE: several default settings have changed in version 5.15, please make sure 00:15:28.822 this does not affect your deployments: 00:15:28.822 - DUP for metadata (-m dup) 00:15:28.822 - enabled no-holes (-O no-holes) 00:15:28.822 - enabled free-space-tree (-R free-space-tree) 00:15:28.822 00:15:28.822 Label: (null) 00:15:28.822 UUID: 30c779da-4f2d-4f64-9274-eadddc192e93 00:15:28.822 Node size: 16384 00:15:28.822 Sector size: 4096 00:15:28.822 Filesystem size: 510.00MiB 00:15:28.822 Block group profiles: 00:15:28.822 Data: single 8.00MiB 00:15:28.822 Metadata: DUP 32.00MiB 00:15:28.822 System: DUP 8.00MiB 00:15:28.822 SSD detected: yes 00:15:28.822 Zoned device: no 00:15:28.822 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:15:28.822 Runtime features: free-space-tree 00:15:28.822 Checksum: crc32c 00:15:28.822 Number of devices: 1 00:15:28.822 Devices: 00:15:28.822 ID SIZE PATH 00:15:28.822 1 510.00MiB /dev/nvme0n1p1 00:15:28.822 00:15:28.822 13:58:08 -- common/autotest_common.sh@931 -- # return 0 00:15:28.822 13:58:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:29.081 13:58:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:29.081 13:58:08 -- target/filesystem.sh@25 -- # sync 00:15:29.081 13:58:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:29.081 13:58:08 -- target/filesystem.sh@27 -- # sync 00:15:29.081 13:58:08 -- target/filesystem.sh@29 -- # i=0 00:15:29.081 13:58:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:29.081 13:58:08 -- target/filesystem.sh@37 -- # kill -0 66479 00:15:29.081 13:58:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:29.081 13:58:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:29.081 13:58:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:29.081 13:58:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:29.081 ************************************ 00:15:29.081 END TEST filesystem_btrfs 00:15:29.081 ************************************ 00:15:29.081 00:15:29.081 real 0m0.283s 00:15:29.081 user 0m0.038s 00:15:29.081 sys 0m0.101s 00:15:29.081 13:58:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:29.081 13:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:29.081 13:58:08 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:15:29.081 13:58:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:29.081 13:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:29.081 13:58:08 -- common/autotest_common.sh@10 -- # set +x 00:15:29.081 ************************************ 00:15:29.081 START TEST filesystem_xfs 00:15:29.081 ************************************ 00:15:29.081 13:58:08 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:15:29.081 13:58:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:15:29.081 13:58:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:29.081 13:58:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:29.081 13:58:08 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:15:29.081 13:58:08 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:29.081 13:58:08 -- common/autotest_common.sh@914 -- # local i=0 00:15:29.081 13:58:08 -- common/autotest_common.sh@915 -- # local force 00:15:29.081 13:58:08 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:15:29.081 13:58:08 -- common/autotest_common.sh@920 -- # force=-f 00:15:29.081 13:58:08 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:29.341 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:29.341 = sectsz=512 attr=2, projid32bit=1 00:15:29.341 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:29.341 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:29.341 data = bsize=4096 blocks=130560, imaxpct=25 00:15:29.341 = sunit=0 swidth=0 blks 00:15:29.341 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:29.341 log =internal log bsize=4096 blocks=16384, version=2 00:15:29.341 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:29.341 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:29.911 Discarding blocks...Done. 00:15:29.911 13:58:09 -- common/autotest_common.sh@931 -- # return 0 00:15:29.911 13:58:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:32.449 13:58:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:32.449 13:58:11 -- target/filesystem.sh@25 -- # sync 00:15:32.449 13:58:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:32.449 13:58:11 -- target/filesystem.sh@27 -- # sync 00:15:32.449 13:58:11 -- target/filesystem.sh@29 -- # i=0 00:15:32.449 13:58:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:32.449 13:58:11 -- target/filesystem.sh@37 -- # kill -0 66479 00:15:32.449 13:58:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:32.449 13:58:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:32.449 13:58:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:32.449 13:58:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:32.449 ************************************ 00:15:32.449 END TEST filesystem_xfs 00:15:32.449 ************************************ 00:15:32.449 00:15:32.449 real 0m3.088s 00:15:32.449 user 0m0.029s 00:15:32.449 sys 0m0.092s 00:15:32.449 13:58:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:32.449 13:58:11 -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 13:58:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:32.449 13:58:11 -- target/filesystem.sh@93 -- # sync 00:15:32.449 13:58:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:32.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.449 13:58:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:32.449 13:58:12 -- common/autotest_common.sh@1205 -- # local i=0 00:15:32.449 13:58:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:32.449 13:58:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.449 13:58:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:32.449 13:58:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:32.449 13:58:12 -- common/autotest_common.sh@1217 -- # return 0 00:15:32.449 13:58:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:32.449 13:58:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:32.449 13:58:12 -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 13:58:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:32.449 13:58:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:32.449 13:58:12 -- target/filesystem.sh@101 -- # killprocess 66479 00:15:32.449 13:58:12 -- common/autotest_common.sh@936 -- # '[' -z 66479 ']' 00:15:32.449 13:58:12 -- common/autotest_common.sh@940 -- # kill -0 66479 00:15:32.449 13:58:12 -- 
common/autotest_common.sh@941 -- # uname 00:15:32.449 13:58:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:32.449 13:58:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66479 00:15:32.449 killing process with pid 66479 00:15:32.449 13:58:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:32.449 13:58:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:32.449 13:58:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66479' 00:15:32.449 13:58:12 -- common/autotest_common.sh@955 -- # kill 66479 00:15:32.449 13:58:12 -- common/autotest_common.sh@960 -- # wait 66479 00:15:35.874 13:58:14 -- target/filesystem.sh@102 -- # nvmfpid= 00:15:35.874 00:15:35.874 real 0m12.786s 00:15:35.874 user 0m46.539s 00:15:35.874 sys 0m2.476s 00:15:35.874 13:58:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.874 ************************************ 00:15:35.874 END TEST nvmf_filesystem_no_in_capsule 00:15:35.874 ************************************ 00:15:35.874 13:58:14 -- common/autotest_common.sh@10 -- # set +x 00:15:35.874 13:58:15 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:15:35.874 13:58:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:35.874 13:58:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.874 13:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:35.874 ************************************ 00:15:35.874 START TEST nvmf_filesystem_in_capsule 00:15:35.874 ************************************ 00:15:35.874 13:58:15 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:15:35.874 13:58:15 -- target/filesystem.sh@47 -- # in_capsule=4096 00:15:35.874 13:58:15 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:35.874 13:58:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:35.874 13:58:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:35.874 13:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:35.874 13:58:15 -- nvmf/common.sh@470 -- # nvmfpid=66852 00:15:35.874 13:58:15 -- nvmf/common.sh@471 -- # waitforlisten 66852 00:15:35.874 13:58:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:35.874 13:58:15 -- common/autotest_common.sh@817 -- # '[' -z 66852 ']' 00:15:35.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.874 13:58:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.874 13:58:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:35.874 13:58:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.874 13:58:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:35.874 13:58:15 -- common/autotest_common.sh@10 -- # set +x 00:15:35.874 [2024-04-26 13:58:15.272179] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:15:35.874 [2024-04-26 13:58:15.272774] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.874 [2024-04-26 13:58:15.449423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.132 [2024-04-26 13:58:15.705977] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.132 [2024-04-26 13:58:15.706026] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.132 [2024-04-26 13:58:15.706043] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.132 [2024-04-26 13:58:15.706055] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.132 [2024-04-26 13:58:15.706068] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.132 [2024-04-26 13:58:15.706262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.132 [2024-04-26 13:58:15.706402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.132 [2024-04-26 13:58:15.707106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.132 [2024-04-26 13:58:15.707140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.700 13:58:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.700 13:58:16 -- common/autotest_common.sh@850 -- # return 0 00:15:36.700 13:58:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:36.700 13:58:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:36.700 13:58:16 -- common/autotest_common.sh@10 -- # set +x 00:15:36.700 13:58:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.700 13:58:16 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:36.700 13:58:16 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:15:36.700 13:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.700 13:58:16 -- common/autotest_common.sh@10 -- # set +x 00:15:36.700 [2024-04-26 13:58:16.269277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.700 13:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.700 13:58:16 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:36.700 13:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.700 13:58:16 -- common/autotest_common.sh@10 -- # set +x 00:15:37.636 Malloc1 00:15:37.636 13:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.636 13:58:16 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.637 13:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.637 13:58:16 -- common/autotest_common.sh@10 -- # set +x 00:15:37.637 13:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.637 13:58:17 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:37.637 13:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.637 13:58:17 -- common/autotest_common.sh@10 -- # set +x 00:15:37.637 13:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.637 13:58:17 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:37.637 13:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.637 13:58:17 -- common/autotest_common.sh@10 -- # set +x 00:15:37.637 [2024-04-26 13:58:17.021387] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:37.637 13:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.637 13:58:17 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:37.637 13:58:17 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:15:37.637 13:58:17 -- common/autotest_common.sh@1365 -- # local bdev_info 00:15:37.637 13:58:17 -- common/autotest_common.sh@1366 -- # local bs 00:15:37.637 13:58:17 -- common/autotest_common.sh@1367 -- # local nb 00:15:37.637 13:58:17 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:37.637 13:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:37.637 13:58:17 -- common/autotest_common.sh@10 -- # set +x 00:15:37.637 13:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:37.637 13:58:17 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:15:37.637 { 00:15:37.637 "aliases": [ 00:15:37.637 "9e12bc2a-bb03-4361-afb6-0bb32c7dd643" 00:15:37.637 ], 00:15:37.637 "assigned_rate_limits": { 00:15:37.637 "r_mbytes_per_sec": 0, 00:15:37.637 "rw_ios_per_sec": 0, 00:15:37.637 "rw_mbytes_per_sec": 0, 00:15:37.637 "w_mbytes_per_sec": 0 00:15:37.637 }, 00:15:37.637 "block_size": 512, 00:15:37.637 "claim_type": "exclusive_write", 00:15:37.637 "claimed": true, 00:15:37.637 "driver_specific": {}, 00:15:37.637 "memory_domains": [ 00:15:37.637 { 00:15:37.637 "dma_device_id": "system", 00:15:37.637 "dma_device_type": 1 00:15:37.637 }, 00:15:37.637 { 00:15:37.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.637 "dma_device_type": 2 00:15:37.637 } 00:15:37.637 ], 00:15:37.637 "name": "Malloc1", 00:15:37.637 "num_blocks": 1048576, 00:15:37.637 "product_name": "Malloc disk", 00:15:37.637 "supported_io_types": { 00:15:37.637 "abort": true, 00:15:37.637 "compare": false, 00:15:37.637 "compare_and_write": false, 00:15:37.637 "flush": true, 00:15:37.637 "nvme_admin": false, 00:15:37.637 "nvme_io": false, 00:15:37.637 "read": true, 00:15:37.637 "reset": true, 00:15:37.637 "unmap": true, 00:15:37.637 "write": true, 00:15:37.637 "write_zeroes": true 00:15:37.637 }, 00:15:37.637 "uuid": "9e12bc2a-bb03-4361-afb6-0bb32c7dd643", 00:15:37.637 "zoned": false 00:15:37.637 } 00:15:37.637 ]' 00:15:37.637 13:58:17 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:15:37.637 13:58:17 -- common/autotest_common.sh@1369 -- # bs=512 00:15:37.637 13:58:17 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:15:37.637 13:58:17 -- common/autotest_common.sh@1370 -- # nb=1048576 00:15:37.637 13:58:17 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:15:37.637 13:58:17 -- common/autotest_common.sh@1374 -- # echo 512 00:15:37.637 13:58:17 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:37.637 13:58:17 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.896 13:58:17 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:15:37.896 13:58:17 -- common/autotest_common.sh@1184 -- # local i=0 00:15:37.896 13:58:17 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:15:37.896 13:58:17 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:15:37.896 13:58:17 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:39.805 13:58:19 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:39.805 13:58:19 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:39.805 13:58:19 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.805 13:58:19 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:15:39.805 13:58:19 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.805 13:58:19 -- common/autotest_common.sh@1194 -- # return 0 00:15:39.805 13:58:19 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:39.805 13:58:19 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:39.805 13:58:19 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:39.805 13:58:19 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:39.805 13:58:19 -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:39.805 13:58:19 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:39.805 13:58:19 -- setup/common.sh@80 -- # echo 536870912 00:15:39.805 13:58:19 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:39.805 13:58:19 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:39.805 13:58:19 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:39.805 13:58:19 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:39.805 13:58:19 -- target/filesystem.sh@69 -- # partprobe 00:15:40.064 13:58:19 -- target/filesystem.sh@70 -- # sleep 1 00:15:41.000 13:58:20 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:15:41.000 13:58:20 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:15:41.000 13:58:20 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:41.000 13:58:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.000 13:58:20 -- common/autotest_common.sh@10 -- # set +x 00:15:41.000 ************************************ 00:15:41.000 START TEST filesystem_in_capsule_ext4 00:15:41.000 ************************************ 00:15:41.000 13:58:20 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:15:41.000 13:58:20 -- target/filesystem.sh@18 -- # fstype=ext4 00:15:41.000 13:58:20 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:41.000 13:58:20 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:15:41.000 13:58:20 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:15:41.000 13:58:20 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:41.000 13:58:20 -- common/autotest_common.sh@914 -- # local i=0 00:15:41.000 13:58:20 -- common/autotest_common.sh@915 -- # local force 00:15:41.000 13:58:20 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:15:41.000 13:58:20 -- common/autotest_common.sh@918 -- # force=-F 00:15:41.000 13:58:20 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:15:41.000 mke2fs 1.46.5 (30-Dec-2021) 00:15:41.260 Discarding device blocks: 0/522240 done 00:15:41.260 Creating filesystem with 522240 1k blocks and 130560 inodes 00:15:41.260 Filesystem UUID: 6be4ff28-2caa-4905-b22c-9942b6158ba8 00:15:41.260 Superblock backups stored on blocks: 00:15:41.260 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:15:41.260 00:15:41.260 Allocating group tables: 0/64 done 
00:15:41.260 Writing inode tables: 0/64 done 00:15:41.260 Creating journal (8192 blocks): done 00:15:41.260 Writing superblocks and filesystem accounting information: 0/64 done 00:15:41.260 00:15:41.260 13:58:20 -- common/autotest_common.sh@931 -- # return 0 00:15:41.260 13:58:20 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:41.260 13:58:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:41.260 13:58:20 -- target/filesystem.sh@25 -- # sync 00:15:41.260 13:58:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:41.260 13:58:20 -- target/filesystem.sh@27 -- # sync 00:15:41.260 13:58:20 -- target/filesystem.sh@29 -- # i=0 00:15:41.260 13:58:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:41.520 13:58:20 -- target/filesystem.sh@37 -- # kill -0 66852 00:15:41.520 13:58:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:41.520 13:58:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:41.520 13:58:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:41.520 13:58:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:41.520 ************************************ 00:15:41.520 END TEST filesystem_in_capsule_ext4 00:15:41.520 ************************************ 00:15:41.520 00:15:41.520 real 0m0.379s 00:15:41.520 user 0m0.035s 00:15:41.520 sys 0m0.081s 00:15:41.520 13:58:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.520 13:58:20 -- common/autotest_common.sh@10 -- # set +x 00:15:41.520 13:58:21 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:15:41.520 13:58:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:41.520 13:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.520 13:58:21 -- common/autotest_common.sh@10 -- # set +x 00:15:41.520 ************************************ 00:15:41.520 START TEST filesystem_in_capsule_btrfs 00:15:41.520 ************************************ 00:15:41.520 13:58:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:15:41.520 13:58:21 -- target/filesystem.sh@18 -- # fstype=btrfs 00:15:41.520 13:58:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:41.520 13:58:21 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:15:41.520 13:58:21 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:15:41.520 13:58:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:41.520 13:58:21 -- common/autotest_common.sh@914 -- # local i=0 00:15:41.520 13:58:21 -- common/autotest_common.sh@915 -- # local force 00:15:41.520 13:58:21 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:15:41.520 13:58:21 -- common/autotest_common.sh@920 -- # force=-f 00:15:41.520 13:58:21 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:15:41.780 btrfs-progs v6.6.2 00:15:41.780 See https://btrfs.readthedocs.io for more information. 00:15:41.780 00:15:41.780 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:15:41.780 NOTE: several default settings have changed in version 5.15, please make sure 00:15:41.780 this does not affect your deployments: 00:15:41.780 - DUP for metadata (-m dup) 00:15:41.780 - enabled no-holes (-O no-holes) 00:15:41.780 - enabled free-space-tree (-R free-space-tree) 00:15:41.780 00:15:41.780 Label: (null) 00:15:41.780 UUID: ed18bb3d-5eb3-4ed8-acb6-e0e43c80e3a5 00:15:41.780 Node size: 16384 00:15:41.780 Sector size: 4096 00:15:41.780 Filesystem size: 510.00MiB 00:15:41.780 Block group profiles: 00:15:41.780 Data: single 8.00MiB 00:15:41.780 Metadata: DUP 32.00MiB 00:15:41.780 System: DUP 8.00MiB 00:15:41.780 SSD detected: yes 00:15:41.780 Zoned device: no 00:15:41.780 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:15:41.780 Runtime features: free-space-tree 00:15:41.780 Checksum: crc32c 00:15:41.780 Number of devices: 1 00:15:41.780 Devices: 00:15:41.780 ID SIZE PATH 00:15:41.780 1 510.00MiB /dev/nvme0n1p1 00:15:41.780 00:15:41.780 13:58:21 -- common/autotest_common.sh@931 -- # return 0 00:15:41.780 13:58:21 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:41.780 13:58:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:41.780 13:58:21 -- target/filesystem.sh@25 -- # sync 00:15:41.780 13:58:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:41.780 13:58:21 -- target/filesystem.sh@27 -- # sync 00:15:41.780 13:58:21 -- target/filesystem.sh@29 -- # i=0 00:15:41.780 13:58:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:41.780 13:58:21 -- target/filesystem.sh@37 -- # kill -0 66852 00:15:41.780 13:58:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:41.780 13:58:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:42.039 13:58:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:42.039 13:58:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:42.039 ************************************ 00:15:42.039 END TEST filesystem_in_capsule_btrfs 00:15:42.039 ************************************ 00:15:42.039 00:15:42.039 real 0m0.335s 00:15:42.039 user 0m0.030s 00:15:42.039 sys 0m0.110s 00:15:42.039 13:58:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:42.039 13:58:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 13:58:21 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:15:42.039 13:58:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:42.039 13:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:42.039 13:58:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.039 ************************************ 00:15:42.039 START TEST filesystem_in_capsule_xfs 00:15:42.039 ************************************ 00:15:42.039 13:58:21 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:15:42.039 13:58:21 -- target/filesystem.sh@18 -- # fstype=xfs 00:15:42.039 13:58:21 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:15:42.039 13:58:21 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:15:42.039 13:58:21 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:15:42.039 13:58:21 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:15:42.039 13:58:21 -- common/autotest_common.sh@914 -- # local i=0 00:15:42.039 13:58:21 -- common/autotest_common.sh@915 -- # local force 00:15:42.039 13:58:21 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:15:42.039 13:58:21 -- common/autotest_common.sh@920 -- # force=-f 
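The xtrace above is the make_filesystem helper picking its force flag right before running mkfs: ext4 gets -F, while btrfs and xfs get -f. A minimal sketch of that selection, reconstructed only from what this trace shows (the real autotest_common.sh helper may add retries or other options):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # mkfs.ext4 forces with -F; mkfs.btrfs and mkfs.xfs force with -f
        if [ "$fstype" = ext4 ]; then
            force=-F
        else
            force=-f
        fi
        mkfs."$fstype" "$force" "$dev_name"
    }
    # invoked here as: make_filesystem xfs /dev/nvme0n1p1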
00:15:42.039 13:58:21 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:15:42.298 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:15:42.298 = sectsz=512 attr=2, projid32bit=1 00:15:42.298 = crc=1 finobt=1, sparse=1, rmapbt=0 00:15:42.298 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:15:42.298 data = bsize=4096 blocks=130560, imaxpct=25 00:15:42.298 = sunit=0 swidth=0 blks 00:15:42.298 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:15:42.298 log =internal log bsize=4096 blocks=16384, version=2 00:15:42.298 = sectsz=512 sunit=0 blks, lazy-count=1 00:15:42.298 realtime =none extsz=4096 blocks=0, rtextents=0 00:15:42.867 Discarding blocks...Done. 00:15:42.867 13:58:22 -- common/autotest_common.sh@931 -- # return 0 00:15:42.867 13:58:22 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:15:44.782 13:58:24 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:15:44.782 13:58:24 -- target/filesystem.sh@25 -- # sync 00:15:44.782 13:58:24 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:15:44.782 13:58:24 -- target/filesystem.sh@27 -- # sync 00:15:44.782 13:58:24 -- target/filesystem.sh@29 -- # i=0 00:15:44.782 13:58:24 -- target/filesystem.sh@30 -- # umount /mnt/device 00:15:44.782 13:58:24 -- target/filesystem.sh@37 -- # kill -0 66852 00:15:44.782 13:58:24 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:15:44.782 13:58:24 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:15:44.782 13:58:24 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:15:44.782 13:58:24 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:15:44.782 ************************************ 00:15:44.782 END TEST filesystem_in_capsule_xfs 00:15:44.782 ************************************ 00:15:44.782 00:15:44.782 real 0m2.682s 00:15:44.782 user 0m0.032s 00:15:44.782 sys 0m0.086s 00:15:44.782 13:58:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:44.782 13:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:44.782 13:58:24 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:15:44.782 13:58:24 -- target/filesystem.sh@93 -- # sync 00:15:44.782 13:58:24 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.041 13:58:24 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.041 13:58:24 -- common/autotest_common.sh@1205 -- # local i=0 00:15:45.041 13:58:24 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:15:45.041 13:58:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.041 13:58:24 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:15:45.041 13:58:24 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.041 13:58:24 -- common/autotest_common.sh@1217 -- # return 0 00:15:45.041 13:58:24 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.041 13:58:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:45.041 13:58:24 -- common/autotest_common.sh@10 -- # set +x 00:15:45.041 13:58:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:45.041 13:58:24 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:45.041 13:58:24 -- target/filesystem.sh@101 -- # killprocess 66852 00:15:45.041 13:58:24 -- common/autotest_common.sh@936 -- # '[' -z 66852 ']' 00:15:45.041 13:58:24 -- common/autotest_common.sh@940 -- # kill -0 66852 
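Each filesystem case in this run (ext4, btrfs, xfs) exercises the mounted partition the same way before the teardown that follows; a sketch of that repeated sequence, reconstructed from the xtrace (device, mountpoint and pid are the ones used in this run, details of the real filesystem.sh may differ):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                      # write a file over the NVMe/TCP-backed namespace
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 66852                              # nvmf target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # namespace still present
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # partition still present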
00:15:45.041 13:58:24 -- common/autotest_common.sh@941 -- # uname 00:15:45.041 13:58:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:45.041 13:58:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66852 00:15:45.041 killing process with pid 66852 00:15:45.041 13:58:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:45.041 13:58:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:45.041 13:58:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66852' 00:15:45.041 13:58:24 -- common/autotest_common.sh@955 -- # kill 66852 00:15:45.041 13:58:24 -- common/autotest_common.sh@960 -- # wait 66852 00:15:48.327 ************************************ 00:15:48.327 END TEST nvmf_filesystem_in_capsule 00:15:48.327 ************************************ 00:15:48.327 13:58:27 -- target/filesystem.sh@102 -- # nvmfpid= 00:15:48.327 00:15:48.327 real 0m12.279s 00:15:48.327 user 0m44.732s 00:15:48.327 sys 0m2.469s 00:15:48.327 13:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.327 13:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.327 13:58:27 -- target/filesystem.sh@108 -- # nvmftestfini 00:15:48.327 13:58:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:48.327 13:58:27 -- nvmf/common.sh@117 -- # sync 00:15:48.327 13:58:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@120 -- # set +e 00:15:48.327 13:58:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.327 13:58:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.327 rmmod nvme_tcp 00:15:48.327 rmmod nvme_fabrics 00:15:48.327 rmmod nvme_keyring 00:15:48.327 13:58:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.327 13:58:27 -- nvmf/common.sh@124 -- # set -e 00:15:48.327 13:58:27 -- nvmf/common.sh@125 -- # return 0 00:15:48.327 13:58:27 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:48.327 13:58:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:48.327 13:58:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.327 13:58:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.327 13:58:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.327 13:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.327 13:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.327 13:58:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.327 ************************************ 00:15:48.327 END TEST nvmf_filesystem 00:15:48.327 ************************************ 00:15:48.327 00:15:48.327 real 0m26.387s 00:15:48.327 user 1m31.669s 00:15:48.327 sys 0m5.604s 00:15:48.327 13:58:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.327 13:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.327 13:58:27 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:48.327 13:58:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.327 13:58:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.327 13:58:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.327 ************************************ 00:15:48.327 START TEST nvmf_discovery 00:15:48.327 ************************************ 00:15:48.327 13:58:27 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:15:48.327 * Looking for test storage... 00:15:48.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:48.327 13:58:27 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.327 13:58:27 -- nvmf/common.sh@7 -- # uname -s 00:15:48.327 13:58:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.327 13:58:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.327 13:58:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.327 13:58:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.327 13:58:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.327 13:58:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.327 13:58:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.327 13:58:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.327 13:58:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.327 13:58:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.327 13:58:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:48.327 13:58:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:48.327 13:58:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.327 13:58:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.327 13:58:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.327 13:58:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.327 13:58:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.327 13:58:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.327 13:58:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.327 13:58:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.327 13:58:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.327 13:58:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.327 13:58:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.327 13:58:27 -- paths/export.sh@5 -- # export PATH 00:15:48.327 13:58:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.327 13:58:27 -- nvmf/common.sh@47 -- # : 0 00:15:48.327 13:58:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.327 13:58:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.327 13:58:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.327 13:58:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.327 13:58:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.327 13:58:27 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:15:48.327 13:58:27 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:15:48.327 13:58:27 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:15:48.327 13:58:27 -- target/discovery.sh@15 -- # hash nvme 00:15:48.327 13:58:27 -- target/discovery.sh@20 -- # nvmftestinit 00:15:48.327 13:58:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:48.327 13:58:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.327 13:58:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:48.327 13:58:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:48.327 13:58:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:48.327 13:58:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.327 13:58:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.327 13:58:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.586 13:58:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:48.586 13:58:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:48.586 13:58:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:48.586 13:58:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:48.586 13:58:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:48.586 13:58:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:48.586 13:58:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.586 13:58:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.586 13:58:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.586 13:58:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:48.586 13:58:28 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.586 13:58:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.586 13:58:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.586 13:58:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.586 13:58:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.586 13:58:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.586 13:58:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.586 13:58:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.586 13:58:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:48.586 13:58:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:48.586 Cannot find device "nvmf_tgt_br" 00:15:48.586 13:58:28 -- nvmf/common.sh@155 -- # true 00:15:48.586 13:58:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.586 Cannot find device "nvmf_tgt_br2" 00:15:48.586 13:58:28 -- nvmf/common.sh@156 -- # true 00:15:48.586 13:58:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:48.586 13:58:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:48.586 Cannot find device "nvmf_tgt_br" 00:15:48.586 13:58:28 -- nvmf/common.sh@158 -- # true 00:15:48.586 13:58:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:48.586 Cannot find device "nvmf_tgt_br2" 00:15:48.586 13:58:28 -- nvmf/common.sh@159 -- # true 00:15:48.586 13:58:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:48.586 13:58:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:48.586 13:58:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.586 13:58:28 -- nvmf/common.sh@162 -- # true 00:15:48.586 13:58:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.586 13:58:28 -- nvmf/common.sh@163 -- # true 00:15:48.586 13:58:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.586 13:58:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.586 13:58:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.586 13:58:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.586 13:58:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.586 13:58:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.586 13:58:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.586 13:58:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.844 13:58:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.844 13:58:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:48.844 13:58:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:48.844 13:58:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:48.844 13:58:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:48.844 13:58:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.844 13:58:28 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.844 13:58:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.844 13:58:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:48.844 13:58:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:48.844 13:58:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.844 13:58:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.844 13:58:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.844 13:58:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.844 13:58:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.844 13:58:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:48.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:15:48.844 00:15:48.844 --- 10.0.0.2 ping statistics --- 00:15:48.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.844 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:15:48.844 13:58:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:48.844 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.844 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:48.844 00:15:48.844 --- 10.0.0.3 ping statistics --- 00:15:48.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.844 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:48.844 13:58:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:48.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:15:48.844 00:15:48.844 --- 10.0.0.1 ping statistics --- 00:15:48.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.844 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:48.844 13:58:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.844 13:58:28 -- nvmf/common.sh@422 -- # return 0 00:15:48.844 13:58:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:48.844 13:58:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.844 13:58:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:48.844 13:58:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:48.844 13:58:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.844 13:58:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:48.844 13:58:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:48.844 13:58:28 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:15:48.844 13:58:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:48.845 13:58:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:48.845 13:58:28 -- common/autotest_common.sh@10 -- # set +x 00:15:48.845 13:58:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:48.845 13:58:28 -- nvmf/common.sh@470 -- # nvmfpid=67392 00:15:48.845 13:58:28 -- nvmf/common.sh@471 -- # waitforlisten 67392 00:15:48.845 13:58:28 -- common/autotest_common.sh@817 -- # '[' -z 67392 ']' 00:15:48.845 13:58:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.845 13:58:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:48.845 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.845 13:58:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.845 13:58:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:48.845 13:58:28 -- common/autotest_common.sh@10 -- # set +x 00:15:49.102 [2024-04-26 13:58:28.568486] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:49.102 [2024-04-26 13:58:28.568607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.102 [2024-04-26 13:58:28.745410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:49.361 [2024-04-26 13:58:29.015950] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.361 [2024-04-26 13:58:29.016007] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.361 [2024-04-26 13:58:29.016024] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.361 [2024-04-26 13:58:29.016037] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.361 [2024-04-26 13:58:29.016050] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:49.361 [2024-04-26 13:58:29.016931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.361 [2024-04-26 13:58:29.017045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.361 [2024-04-26 13:58:29.017142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.361 [2024-04-26 13:58:29.017102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.957 13:58:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.957 13:58:29 -- common/autotest_common.sh@850 -- # return 0 00:15:49.957 13:58:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:49.957 13:58:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:49.957 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:49.957 13:58:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.957 13:58:29 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:49.957 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.957 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:49.957 [2024-04-26 13:58:29.534291] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.957 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.957 13:58:29 -- target/discovery.sh@26 -- # seq 1 4 00:15:49.957 13:58:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:49.957 13:58:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:15:49.957 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.957 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:49.957 Null1 00:15:49.958 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.958 13:58:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:49.958 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.958 13:58:29 -- 
common/autotest_common.sh@10 -- # set +x 00:15:49.958 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.958 13:58:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:15:49.958 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.958 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:49.958 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.958 13:58:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.958 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.958 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:49.958 [2024-04-26 13:58:29.613880] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.958 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:49.958 13:58:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:49.958 13:58:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:15:49.958 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:49.958 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 Null2 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:50.223 13:58:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 Null3 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:15:50.223 13:58:29 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 Null4 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:15:50.223 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.223 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.223 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.223 13:58:29 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 4420 00:15:50.223 00:15:50.223 Discovery Log Number of Records 6, Generation counter 6 00:15:50.223 =====Discovery Log Entry 0====== 00:15:50.223 trtype: tcp 00:15:50.223 adrfam: ipv4 00:15:50.223 subtype: current discovery subsystem 00:15:50.223 treq: not required 00:15:50.223 portid: 0 00:15:50.223 trsvcid: 4420 00:15:50.223 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:50.223 traddr: 10.0.0.2 00:15:50.223 eflags: explicit discovery connections, duplicate discovery information 00:15:50.223 sectype: none 00:15:50.223 =====Discovery Log Entry 1====== 00:15:50.223 trtype: tcp 00:15:50.223 adrfam: ipv4 00:15:50.223 subtype: nvme subsystem 00:15:50.223 treq: not required 00:15:50.223 portid: 0 00:15:50.223 trsvcid: 4420 00:15:50.223 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:50.224 traddr: 10.0.0.2 00:15:50.224 eflags: none 00:15:50.224 sectype: none 00:15:50.224 =====Discovery Log Entry 2====== 00:15:50.224 trtype: tcp 00:15:50.224 adrfam: ipv4 
00:15:50.224 subtype: nvme subsystem 00:15:50.224 treq: not required 00:15:50.224 portid: 0 00:15:50.224 trsvcid: 4420 00:15:50.224 subnqn: nqn.2016-06.io.spdk:cnode2 00:15:50.224 traddr: 10.0.0.2 00:15:50.224 eflags: none 00:15:50.224 sectype: none 00:15:50.224 =====Discovery Log Entry 3====== 00:15:50.224 trtype: tcp 00:15:50.224 adrfam: ipv4 00:15:50.224 subtype: nvme subsystem 00:15:50.224 treq: not required 00:15:50.224 portid: 0 00:15:50.224 trsvcid: 4420 00:15:50.224 subnqn: nqn.2016-06.io.spdk:cnode3 00:15:50.224 traddr: 10.0.0.2 00:15:50.224 eflags: none 00:15:50.224 sectype: none 00:15:50.224 =====Discovery Log Entry 4====== 00:15:50.224 trtype: tcp 00:15:50.224 adrfam: ipv4 00:15:50.224 subtype: nvme subsystem 00:15:50.224 treq: not required 00:15:50.224 portid: 0 00:15:50.224 trsvcid: 4420 00:15:50.224 subnqn: nqn.2016-06.io.spdk:cnode4 00:15:50.224 traddr: 10.0.0.2 00:15:50.224 eflags: none 00:15:50.224 sectype: none 00:15:50.224 =====Discovery Log Entry 5====== 00:15:50.224 trtype: tcp 00:15:50.224 adrfam: ipv4 00:15:50.224 subtype: discovery subsystem referral 00:15:50.224 treq: not required 00:15:50.224 portid: 0 00:15:50.224 trsvcid: 4430 00:15:50.224 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:50.224 traddr: 10.0.0.2 00:15:50.224 eflags: none 00:15:50.224 sectype: none 00:15:50.224 Perform nvmf subsystem discovery via RPC 00:15:50.224 13:58:29 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:15:50.224 13:58:29 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:15:50.224 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.224 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.224 [2024-04-26 13:58:29.845712] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:50.224 [ 00:15:50.224 { 00:15:50.224 "allow_any_host": true, 00:15:50.224 "hosts": [], 00:15:50.224 "listen_addresses": [ 00:15:50.224 { 00:15:50.224 "adrfam": "IPv4", 00:15:50.224 "traddr": "10.0.0.2", 00:15:50.224 "transport": "TCP", 00:15:50.224 "trsvcid": "4420", 00:15:50.224 "trtype": "TCP" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:50.224 "subtype": "Discovery" 00:15:50.224 }, 00:15:50.224 { 00:15:50.224 "allow_any_host": true, 00:15:50.224 "hosts": [], 00:15:50.224 "listen_addresses": [ 00:15:50.224 { 00:15:50.224 "adrfam": "IPv4", 00:15:50.224 "traddr": "10.0.0.2", 00:15:50.224 "transport": "TCP", 00:15:50.224 "trsvcid": "4420", 00:15:50.224 "trtype": "TCP" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "max_cntlid": 65519, 00:15:50.224 "max_namespaces": 32, 00:15:50.224 "min_cntlid": 1, 00:15:50.224 "model_number": "SPDK bdev Controller", 00:15:50.224 "namespaces": [ 00:15:50.224 { 00:15:50.224 "bdev_name": "Null1", 00:15:50.224 "name": "Null1", 00:15:50.224 "nguid": "5B0F2051C8C845B59FD4DE1C0F6F9455", 00:15:50.224 "nsid": 1, 00:15:50.224 "uuid": "5b0f2051-c8c8-45b5-9fd4-de1c0f6f9455" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.224 "serial_number": "SPDK00000000000001", 00:15:50.224 "subtype": "NVMe" 00:15:50.224 }, 00:15:50.224 { 00:15:50.224 "allow_any_host": true, 00:15:50.224 "hosts": [], 00:15:50.224 "listen_addresses": [ 00:15:50.224 { 00:15:50.224 "adrfam": "IPv4", 00:15:50.224 "traddr": "10.0.0.2", 00:15:50.224 "transport": "TCP", 00:15:50.224 "trsvcid": "4420", 00:15:50.224 "trtype": "TCP" 00:15:50.224 
} 00:15:50.224 ], 00:15:50.224 "max_cntlid": 65519, 00:15:50.224 "max_namespaces": 32, 00:15:50.224 "min_cntlid": 1, 00:15:50.224 "model_number": "SPDK bdev Controller", 00:15:50.224 "namespaces": [ 00:15:50.224 { 00:15:50.224 "bdev_name": "Null2", 00:15:50.224 "name": "Null2", 00:15:50.224 "nguid": "6655A6164B3C44FAAEA8A5519A0E2E8C", 00:15:50.224 "nsid": 1, 00:15:50.224 "uuid": "6655a616-4b3c-44fa-aea8-a5519a0e2e8c" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:50.224 "serial_number": "SPDK00000000000002", 00:15:50.224 "subtype": "NVMe" 00:15:50.224 }, 00:15:50.224 { 00:15:50.224 "allow_any_host": true, 00:15:50.224 "hosts": [], 00:15:50.224 "listen_addresses": [ 00:15:50.224 { 00:15:50.224 "adrfam": "IPv4", 00:15:50.224 "traddr": "10.0.0.2", 00:15:50.224 "transport": "TCP", 00:15:50.224 "trsvcid": "4420", 00:15:50.224 "trtype": "TCP" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "max_cntlid": 65519, 00:15:50.224 "max_namespaces": 32, 00:15:50.224 "min_cntlid": 1, 00:15:50.224 "model_number": "SPDK bdev Controller", 00:15:50.224 "namespaces": [ 00:15:50.224 { 00:15:50.224 "bdev_name": "Null3", 00:15:50.224 "name": "Null3", 00:15:50.224 "nguid": "8F9DA935B2F84320A65FF33BEE2DE58F", 00:15:50.224 "nsid": 1, 00:15:50.224 "uuid": "8f9da935-b2f8-4320-a65f-f33bee2de58f" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:15:50.224 "serial_number": "SPDK00000000000003", 00:15:50.224 "subtype": "NVMe" 00:15:50.224 }, 00:15:50.224 { 00:15:50.224 "allow_any_host": true, 00:15:50.224 "hosts": [], 00:15:50.224 "listen_addresses": [ 00:15:50.224 { 00:15:50.224 "adrfam": "IPv4", 00:15:50.224 "traddr": "10.0.0.2", 00:15:50.224 "transport": "TCP", 00:15:50.224 "trsvcid": "4420", 00:15:50.224 "trtype": "TCP" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "max_cntlid": 65519, 00:15:50.224 "max_namespaces": 32, 00:15:50.224 "min_cntlid": 1, 00:15:50.224 "model_number": "SPDK bdev Controller", 00:15:50.224 "namespaces": [ 00:15:50.224 { 00:15:50.224 "bdev_name": "Null4", 00:15:50.224 "name": "Null4", 00:15:50.224 "nguid": "7E424628722045149716B9330B0803AE", 00:15:50.224 "nsid": 1, 00:15:50.224 "uuid": "7e424628-7220-4514-9716-b9330b0803ae" 00:15:50.224 } 00:15:50.224 ], 00:15:50.224 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:15:50.224 "serial_number": "SPDK00000000000004", 00:15:50.224 "subtype": "NVMe" 00:15:50.224 } 00:15:50.224 ] 00:15:50.224 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.224 13:58:29 -- target/discovery.sh@42 -- # seq 1 4 00:15:50.483 13:58:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:50.483 13:58:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.483 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.483 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.483 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.483 13:58:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:15:50.483 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.483 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.483 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.483 13:58:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:50.484 13:58:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:50.484 13:58:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:15:50.484 13:58:29 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:29 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:15:50.484 13:58:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:50.484 13:58:29 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:15:50.484 13:58:29 -- common/autotest_common.sh@10 -- # set +x 00:15:50.484 13:58:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:50.484 13:58:30 -- target/discovery.sh@49 -- # check_bdevs= 00:15:50.484 13:58:30 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:15:50.484 13:58:30 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:15:50.484 13:58:30 -- target/discovery.sh@57 -- # nvmftestfini 00:15:50.484 13:58:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:50.484 13:58:30 -- nvmf/common.sh@117 -- # sync 00:15:50.484 13:58:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.484 13:58:30 -- nvmf/common.sh@120 -- # set +e 00:15:50.484 13:58:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.484 13:58:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.484 rmmod nvme_tcp 00:15:50.484 rmmod nvme_fabrics 00:15:50.484 rmmod nvme_keyring 00:15:50.484 13:58:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.484 13:58:30 -- nvmf/common.sh@124 -- # set -e 00:15:50.484 13:58:30 -- nvmf/common.sh@125 -- # return 0 00:15:50.484 13:58:30 -- nvmf/common.sh@478 -- # '[' -n 67392 ']' 00:15:50.484 13:58:30 -- nvmf/common.sh@479 -- # 
killprocess 67392 00:15:50.484 13:58:30 -- common/autotest_common.sh@936 -- # '[' -z 67392 ']' 00:15:50.484 13:58:30 -- common/autotest_common.sh@940 -- # kill -0 67392 00:15:50.484 13:58:30 -- common/autotest_common.sh@941 -- # uname 00:15:50.484 13:58:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.484 13:58:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67392 00:15:50.742 13:58:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.742 13:58:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.742 killing process with pid 67392 00:15:50.742 13:58:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67392' 00:15:50.742 13:58:30 -- common/autotest_common.sh@955 -- # kill 67392 00:15:50.742 [2024-04-26 13:58:30.178714] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:50.742 13:58:30 -- common/autotest_common.sh@960 -- # wait 67392 00:15:52.122 13:58:31 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:52.122 13:58:31 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:52.122 13:58:31 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:52.122 13:58:31 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:52.122 13:58:31 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:52.122 13:58:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.122 13:58:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.122 13:58:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.122 13:58:31 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:52.122 00:15:52.122 real 0m3.817s 00:15:52.122 user 0m9.381s 00:15:52.122 sys 0m0.908s 00:15:52.122 ************************************ 00:15:52.122 END TEST nvmf_discovery 00:15:52.122 ************************************ 00:15:52.122 13:58:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:52.122 13:58:31 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 13:58:31 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:52.122 13:58:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:52.122 13:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:52.122 13:58:31 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 ************************************ 00:15:52.122 START TEST nvmf_referrals 00:15:52.122 ************************************ 00:15:52.122 13:58:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:15:52.381 * Looking for test storage... 
00:15:52.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.381 13:58:31 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.381 13:58:31 -- nvmf/common.sh@7 -- # uname -s 00:15:52.381 13:58:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.381 13:58:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.381 13:58:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.381 13:58:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.381 13:58:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.381 13:58:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.381 13:58:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.381 13:58:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.381 13:58:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.381 13:58:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.381 13:58:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:52.381 13:58:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:52.381 13:58:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.381 13:58:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.381 13:58:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.381 13:58:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.381 13:58:31 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.381 13:58:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.381 13:58:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.381 13:58:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.381 13:58:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.381 13:58:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.381 13:58:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.381 13:58:31 -- paths/export.sh@5 -- # export PATH 00:15:52.381 13:58:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.381 13:58:31 -- nvmf/common.sh@47 -- # : 0 00:15:52.381 13:58:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:52.381 13:58:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:52.381 13:58:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.381 13:58:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.381 13:58:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.381 13:58:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:52.381 13:58:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:52.381 13:58:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:52.381 13:58:31 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:15:52.381 13:58:31 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:15:52.381 13:58:31 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:15:52.381 13:58:31 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:15:52.381 13:58:31 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:52.381 13:58:31 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:52.381 13:58:31 -- target/referrals.sh@37 -- # nvmftestinit 00:15:52.381 13:58:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:52.381 13:58:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.381 13:58:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:52.382 13:58:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:52.382 13:58:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:52.382 13:58:31 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.382 13:58:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:52.382 13:58:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.382 13:58:31 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:52.382 13:58:31 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:52.382 13:58:31 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:52.382 13:58:31 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:52.382 13:58:31 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:52.382 13:58:31 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:52.382 13:58:31 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.382 13:58:31 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
00:15:52.382 13:58:31 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:52.382 13:58:31 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:52.382 13:58:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.382 13:58:31 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.382 13:58:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.382 13:58:31 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.382 13:58:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.382 13:58:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.382 13:58:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.382 13:58:31 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.382 13:58:31 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:52.382 13:58:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:52.382 Cannot find device "nvmf_tgt_br" 00:15:52.382 13:58:32 -- nvmf/common.sh@155 -- # true 00:15:52.382 13:58:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.382 Cannot find device "nvmf_tgt_br2" 00:15:52.382 13:58:32 -- nvmf/common.sh@156 -- # true 00:15:52.382 13:58:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:52.382 13:58:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:52.641 Cannot find device "nvmf_tgt_br" 00:15:52.642 13:58:32 -- nvmf/common.sh@158 -- # true 00:15:52.642 13:58:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:52.642 Cannot find device "nvmf_tgt_br2" 00:15:52.642 13:58:32 -- nvmf/common.sh@159 -- # true 00:15:52.642 13:58:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:52.642 13:58:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:52.642 13:58:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.642 13:58:32 -- nvmf/common.sh@162 -- # true 00:15:52.642 13:58:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.642 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.642 13:58:32 -- nvmf/common.sh@163 -- # true 00:15:52.642 13:58:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.642 13:58:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.642 13:58:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.642 13:58:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.642 13:58:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.642 13:58:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.642 13:58:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.642 13:58:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:52.642 13:58:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:52.642 13:58:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:52.642 13:58:32 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:52.642 13:58:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 
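The trace above is nvmf_veth_init rebuilding the virtual test topology; the earlier "Cannot find device" / "Cannot open network namespace" messages come from its cleanup pass and are expected on a freshly provisioned VM. A condensed sketch of the same sequence (the namespace, interface names and 10.0.0.0/24 addresses are exactly those in the trace; cleanup and error handling omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP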
00:15:52.642 13:58:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:52.642 13:58:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.642 13:58:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.642 13:58:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.642 13:58:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:52.642 13:58:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:52.642 13:58:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.901 13:58:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.901 13:58:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.901 13:58:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.901 13:58:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.901 13:58:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:52.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:15:52.901 00:15:52.901 --- 10.0.0.2 ping statistics --- 00:15:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.901 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:52.901 13:58:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:52.901 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.901 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:15:52.901 00:15:52.901 --- 10.0.0.3 ping statistics --- 00:15:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.901 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:52.901 13:58:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:15:52.901 00:15:52.901 --- 10.0.0.1 ping statistics --- 00:15:52.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.901 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:52.901 13:58:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.901 13:58:32 -- nvmf/common.sh@422 -- # return 0 00:15:52.901 13:58:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:52.901 13:58:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.901 13:58:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:52.901 13:58:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:52.901 13:58:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.901 13:58:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:52.901 13:58:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:52.901 13:58:32 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:15:52.901 13:58:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:52.901 13:58:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:52.901 13:58:32 -- common/autotest_common.sh@10 -- # set +x 00:15:52.901 13:58:32 -- nvmf/common.sh@470 -- # nvmfpid=67646 00:15:52.901 13:58:32 -- nvmf/common.sh@471 -- # waitforlisten 67646 00:15:52.901 13:58:32 -- common/autotest_common.sh@817 -- # '[' -z 67646 ']' 00:15:52.901 13:58:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.901 13:58:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:52.901 13:58:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.901 13:58:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.901 13:58:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:52.901 13:58:32 -- common/autotest_common.sh@10 -- # set +x 00:15:52.901 [2024-04-26 13:58:32.519592] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:52.901 [2024-04-26 13:58:32.519902] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.160 [2024-04-26 13:58:32.692673] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.420 [2024-04-26 13:58:32.976948] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.420 [2024-04-26 13:58:32.977018] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.420 [2024-04-26 13:58:32.977043] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.420 [2024-04-26 13:58:32.977060] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.420 [2024-04-26 13:58:32.977078] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
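With the nvmf_br bridge up, the iptables ACCEPT rules installed and all three pings answered, nvmfappstart launches the target inside the namespace and blocks until its RPC socket responds. A minimal sketch of that launch-and-wait step, assuming the paths shown in the trace (the polling loop below is only an illustrative approximation of the waitforlisten helper, not its actual implementation):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll the UNIX domain RPC socket until the app answers, then let the test continue
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done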
00:15:53.420 [2024-04-26 13:58:32.977219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.420 [2024-04-26 13:58:32.977657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.420 [2024-04-26 13:58:32.978434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.420 [2024-04-26 13:58:32.978769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.994 13:58:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:53.994 13:58:33 -- common/autotest_common.sh@850 -- # return 0 00:15:53.994 13:58:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:53.994 13:58:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.994 13:58:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.994 13:58:33 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:53.994 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.994 [2024-04-26 13:58:33.482799] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.994 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.994 13:58:33 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:15:53.994 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.994 [2024-04-26 13:58:33.506969] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:15:53.994 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.994 13:58:33 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:15:53.994 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.994 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.994 13:58:33 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:15:53.994 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.994 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.994 13:58:33 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:15:53.994 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.994 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.994 13:58:33 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:53.994 13:58:33 -- target/referrals.sh@48 -- # jq length 00:15:53.994 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.994 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.995 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.995 13:58:33 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:15:53.995 13:58:33 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:15:53.995 13:58:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:53.995 13:58:33 -- target/referrals.sh@21 -- # sort 00:15:53.995 13:58:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:53.995 13:58:33 -- 
target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:53.995 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:53.995 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.995 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:53.995 13:58:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:53.995 13:58:33 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:53.995 13:58:33 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:15:53.995 13:58:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:53.995 13:58:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:53.995 13:58:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:53.995 13:58:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:53.995 13:58:33 -- target/referrals.sh@26 -- # sort 00:15:54.254 13:58:33 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:15:54.254 13:58:33 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- target/referrals.sh@56 -- # jq length 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:15:54.254 13:58:33 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:15:54.254 13:58:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:54.254 13:58:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:54.254 13:58:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.254 13:58:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:54.254 13:58:33 -- target/referrals.sh@26 -- # sort 00:15:54.254 13:58:33 -- target/referrals.sh@26 -- # echo 00:15:54.254 13:58:33 
-- target/referrals.sh@57 -- # [[ '' == '' ]] 00:15:54.254 13:58:33 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.254 13:58:33 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:15:54.254 13:58:33 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:54.254 13:58:33 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:54.254 13:58:33 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:54.254 13:58:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.254 13:58:33 -- target/referrals.sh@21 -- # sort 00:15:54.254 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:15:54.254 13:58:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.513 13:58:33 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:15:54.513 13:58:33 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:54.513 13:58:33 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:15:54.513 13:58:33 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:54.513 13:58:33 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:54.513 13:58:33 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:54.513 13:58:33 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.513 13:58:33 -- target/referrals.sh@26 -- # sort 00:15:54.513 13:58:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:15:54.513 13:58:34 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:15:54.513 13:58:34 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:15:54.513 13:58:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:54.513 13:58:34 -- target/referrals.sh@67 -- # jq -r .subnqn 00:15:54.513 13:58:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.513 13:58:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:54.513 13:58:34 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:54.513 13:58:34 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:15:54.513 13:58:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:54.513 13:58:34 -- target/referrals.sh@68 -- # jq -r .subnqn 00:15:54.513 13:58:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 
--hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.513 13:58:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:15:54.513 13:58:34 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:54.513 13:58:34 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:15:54.513 13:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.513 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:54.773 13:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.773 13:58:34 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:15:54.773 13:58:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:15:54.773 13:58:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:54.773 13:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:54.773 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:54.773 13:58:34 -- target/referrals.sh@21 -- # sort 00:15:54.773 13:58:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:15:54.773 13:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:54.773 13:58:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:15:54.773 13:58:34 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:54.773 13:58:34 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:15:54.773 13:58:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:54.773 13:58:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:54.773 13:58:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.773 13:58:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:54.773 13:58:34 -- target/referrals.sh@26 -- # sort 00:15:54.773 13:58:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:15:54.773 13:58:34 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:15:54.773 13:58:34 -- target/referrals.sh@75 -- # jq -r .subnqn 00:15:54.773 13:58:34 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:15:54.773 13:58:34 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:15:54.773 13:58:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.773 13:58:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:15:54.773 13:58:34 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:15:54.773 13:58:34 -- target/referrals.sh@76 -- # jq -r .subnqn 00:15:54.773 13:58:34 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:15:54.773 13:58:34 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:15:54.773 13:58:34 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:54.773 13:58:34 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
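Every referral step in this trace is verified from both ends: the target's own RPC view and what an initiator sees in the discovery log on 10.0.0.2:8009. The recurring pattern, condensed (rpc_cmd is the test wrapper around scripts/rpc.py; the jq filters are the ones used above):

  # target-side view: addresses of the configured referrals
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # initiator-side view: discovery log entries other than the current discovery subsystem
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json |
      jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The test passes when both listings agree after each nvmf_discovery_add_referral / nvmf_discovery_remove_referral call, and when a referral added with -n nqn.2016-06.io.spdk:cnode1 shows up as an "nvme subsystem" record rather than a "discovery subsystem referral".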
00:15:55.032 13:58:34 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:15:55.032 13:58:34 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:15:55.032 13:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.033 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 13:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.033 13:58:34 -- target/referrals.sh@82 -- # jq length 00:15:55.033 13:58:34 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:15:55.033 13:58:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.033 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:15:55.033 13:58:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.033 13:58:34 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:15:55.033 13:58:34 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:15:55.033 13:58:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:15:55.033 13:58:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:15:55.033 13:58:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -a 10.0.0.2 -s 8009 -o json 00:15:55.033 13:58:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:15:55.033 13:58:34 -- target/referrals.sh@26 -- # sort 00:15:55.033 13:58:34 -- target/referrals.sh@26 -- # echo 00:15:55.033 13:58:34 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:15:55.033 13:58:34 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:15:55.033 13:58:34 -- target/referrals.sh@86 -- # nvmftestfini 00:15:55.033 13:58:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:55.033 13:58:34 -- nvmf/common.sh@117 -- # sync 00:15:55.033 13:58:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:55.033 13:58:34 -- nvmf/common.sh@120 -- # set +e 00:15:55.033 13:58:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:55.033 13:58:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:55.033 rmmod nvme_tcp 00:15:55.033 rmmod nvme_fabrics 00:15:55.033 rmmod nvme_keyring 00:15:55.033 13:58:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:55.033 13:58:34 -- nvmf/common.sh@124 -- # set -e 00:15:55.033 13:58:34 -- nvmf/common.sh@125 -- # return 0 00:15:55.033 13:58:34 -- nvmf/common.sh@478 -- # '[' -n 67646 ']' 00:15:55.033 13:58:34 -- nvmf/common.sh@479 -- # killprocess 67646 00:15:55.033 13:58:34 -- common/autotest_common.sh@936 -- # '[' -z 67646 ']' 00:15:55.033 13:58:34 -- common/autotest_common.sh@940 -- # kill -0 67646 00:15:55.033 13:58:34 -- common/autotest_common.sh@941 -- # uname 00:15:55.033 13:58:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.033 13:58:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67646 00:15:55.292 killing process with pid 67646 00:15:55.292 13:58:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.292 13:58:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.292 13:58:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67646' 00:15:55.292 13:58:34 -- common/autotest_common.sh@955 -- # kill 67646 00:15:55.292 13:58:34 -- common/autotest_common.sh@960 -- # wait 67646 00:15:56.673 13:58:36 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:56.673 13:58:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:56.673 13:58:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:56.673 13:58:36 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.673 13:58:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.673 13:58:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.673 13:58:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.673 13:58:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.673 13:58:36 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.673 00:15:56.673 real 0m4.271s 00:15:56.673 user 0m11.847s 00:15:56.673 sys 0m1.196s 00:15:56.673 13:58:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:56.673 13:58:36 -- common/autotest_common.sh@10 -- # set +x 00:15:56.673 ************************************ 00:15:56.673 END TEST nvmf_referrals 00:15:56.673 ************************************ 00:15:56.673 13:58:36 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:56.673 13:58:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:56.673 13:58:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:56.673 13:58:36 -- common/autotest_common.sh@10 -- # set +x 00:15:56.673 ************************************ 00:15:56.673 START TEST nvmf_connect_disconnect 00:15:56.673 ************************************ 00:15:56.673 13:58:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:15:56.673 * Looking for test storage... 00:15:56.933 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.933 13:58:36 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.933 13:58:36 -- nvmf/common.sh@7 -- # uname -s 00:15:56.933 13:58:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.933 13:58:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.933 13:58:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.933 13:58:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.933 13:58:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.933 13:58:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.933 13:58:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.933 13:58:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.933 13:58:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.933 13:58:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.933 13:58:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:56.933 13:58:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:15:56.933 13:58:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.933 13:58:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.933 13:58:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.933 13:58:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.933 13:58:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.933 13:58:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.933 13:58:36 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.933 13:58:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.933 13:58:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.933 13:58:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.933 13:58:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.933 13:58:36 -- paths/export.sh@5 -- # export PATH 00:15:56.933 13:58:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.933 13:58:36 -- nvmf/common.sh@47 -- # : 0 00:15:56.933 13:58:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.933 13:58:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.933 13:58:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.933 13:58:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.933 13:58:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.933 13:58:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.933 13:58:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.933 13:58:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.933 13:58:36 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.933 13:58:36 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.933 13:58:36 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:15:56.933 13:58:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:56.933 13:58:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.933 13:58:36 -- nvmf/common.sh@437 -- # 
prepare_net_devs 00:15:56.933 13:58:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:56.933 13:58:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:56.933 13:58:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.933 13:58:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.933 13:58:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.933 13:58:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:56.933 13:58:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:56.933 13:58:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:56.933 13:58:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:56.933 13:58:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:56.933 13:58:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:56.933 13:58:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.933 13:58:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.933 13:58:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.933 13:58:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.933 13:58:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.933 13:58:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.933 13:58:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.933 13:58:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.933 13:58:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.933 13:58:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.933 13:58:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.933 13:58:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.933 13:58:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.933 13:58:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.933 Cannot find device "nvmf_tgt_br" 00:15:56.933 13:58:36 -- nvmf/common.sh@155 -- # true 00:15:56.933 13:58:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.933 Cannot find device "nvmf_tgt_br2" 00:15:56.933 13:58:36 -- nvmf/common.sh@156 -- # true 00:15:56.933 13:58:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.933 13:58:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.933 Cannot find device "nvmf_tgt_br" 00:15:56.933 13:58:36 -- nvmf/common.sh@158 -- # true 00:15:56.933 13:58:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.933 Cannot find device "nvmf_tgt_br2" 00:15:56.933 13:58:36 -- nvmf/common.sh@159 -- # true 00:15:56.933 13:58:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.933 13:58:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.933 13:58:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.933 13:58:36 -- nvmf/common.sh@162 -- # true 00:15:56.933 13:58:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.933 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.933 13:58:36 -- nvmf/common.sh@163 -- # true 00:15:56.933 13:58:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.933 13:58:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:15:56.933 13:58:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.193 13:58:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.194 13:58:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.194 13:58:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.194 13:58:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.194 13:58:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.194 13:58:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:57.194 13:58:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:57.194 13:58:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:57.194 13:58:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:57.194 13:58:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:57.194 13:58:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.194 13:58:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.194 13:58:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.194 13:58:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:57.194 13:58:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:57.194 13:58:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.194 13:58:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.194 13:58:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.194 13:58:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.194 13:58:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.194 13:58:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:57.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:15:57.194 00:15:57.194 --- 10.0.0.2 ping statistics --- 00:15:57.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.194 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:15:57.194 13:58:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:57.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:15:57.194 00:15:57.194 --- 10.0.0.3 ping statistics --- 00:15:57.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.194 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:57.194 13:58:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:15:57.194 00:15:57.194 --- 10.0.0.1 ping statistics --- 00:15:57.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.194 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:15:57.194 13:58:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.194 13:58:36 -- nvmf/common.sh@422 -- # return 0 00:15:57.194 13:58:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:57.194 13:58:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.454 13:58:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:57.454 13:58:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:57.454 13:58:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.454 13:58:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:57.454 13:58:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:57.454 13:58:36 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:15:57.454 13:58:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:57.454 13:58:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:57.454 13:58:36 -- common/autotest_common.sh@10 -- # set +x 00:15:57.454 13:58:36 -- nvmf/common.sh@470 -- # nvmfpid=67965 00:15:57.454 13:58:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.454 13:58:36 -- nvmf/common.sh@471 -- # waitforlisten 67965 00:15:57.454 13:58:36 -- common/autotest_common.sh@817 -- # '[' -z 67965 ']' 00:15:57.454 13:58:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.454 13:58:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:57.454 13:58:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.454 13:58:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:57.454 13:58:36 -- common/autotest_common.sh@10 -- # set +x 00:15:57.454 [2024-04-26 13:58:37.009760] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:15:57.454 [2024-04-26 13:58:37.009887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.713 [2024-04-26 13:58:37.185329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.972 [2024-04-26 13:58:37.439632] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.972 [2024-04-26 13:58:37.439699] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.972 [2024-04-26 13:58:37.439721] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.972 [2024-04-26 13:58:37.439733] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.972 [2024-04-26 13:58:37.439747] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
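The trace that follows brings up the reactors and then builds the minimal target stack that the connect/disconnect loop exercises. A condensed recap of the RPC calls recorded below, plus an illustrative loop body (the exact nvme connect invocation is not part of this excerpt; the flags shown are standard nvme-cli ones and are only an assumption about what the script runs):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                          # creates Malloc0 (64 MB, 512-byte blocks)
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  for i in $(seq 1 5); do                                    # num_iterations=5
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 "${NVME_HOST[@]}"
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # logged as "... disconnected 1 controller(s)"
  done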
00:15:57.972 [2024-04-26 13:58:37.439867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.972 [2024-04-26 13:58:37.440117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.972 [2024-04-26 13:58:37.440669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.972 [2024-04-26 13:58:37.440701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.231 13:58:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:58.231 13:58:37 -- common/autotest_common.sh@850 -- # return 0 00:15:58.231 13:58:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:58.231 13:58:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:58.231 13:58:37 -- common/autotest_common.sh@10 -- # set +x 00:15:58.490 13:58:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.490 13:58:37 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:58.490 13:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.490 13:58:37 -- common/autotest_common.sh@10 -- # set +x 00:15:58.490 [2024-04-26 13:58:37.960727] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.490 13:58:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.490 13:58:37 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:15:58.490 13:58:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.490 13:58:37 -- common/autotest_common.sh@10 -- # set +x 00:15:58.490 13:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:58.490 13:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.490 13:58:38 -- common/autotest_common.sh@10 -- # set +x 00:15:58.490 13:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.490 13:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.490 13:58:38 -- common/autotest_common.sh@10 -- # set +x 00:15:58.490 13:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.490 13:58:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.490 13:58:38 -- common/autotest_common.sh@10 -- # set +x 00:15:58.490 [2024-04-26 13:58:38.121521] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.490 13:58:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:15:58.490 13:58:38 -- target/connect_disconnect.sh@34 -- # set +x 00:16:01.025 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.921 13:58:49 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:09.921 13:58:49 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:09.921 13:58:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:09.921 13:58:49 -- nvmf/common.sh@117 -- # sync 00:16:09.921 13:58:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.921 13:58:49 -- nvmf/common.sh@120 -- # set +e 00:16:09.921 13:58:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.921 13:58:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.921 rmmod nvme_tcp 00:16:09.921 rmmod nvme_fabrics 00:16:09.921 rmmod nvme_keyring 00:16:09.921 13:58:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.921 13:58:49 -- nvmf/common.sh@124 -- # set -e 00:16:09.921 13:58:49 -- nvmf/common.sh@125 -- # return 0 00:16:09.921 13:58:49 -- nvmf/common.sh@478 -- # '[' -n 67965 ']' 00:16:09.921 13:58:49 -- nvmf/common.sh@479 -- # killprocess 67965 00:16:09.921 13:58:49 -- common/autotest_common.sh@936 -- # '[' -z 67965 ']' 00:16:09.921 13:58:49 -- common/autotest_common.sh@940 -- # kill -0 67965 00:16:09.921 13:58:49 -- common/autotest_common.sh@941 -- # uname 00:16:09.921 13:58:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:09.921 13:58:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67965 00:16:09.921 13:58:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:09.921 13:58:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:09.921 13:58:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67965' 00:16:09.921 killing process with pid 67965 00:16:09.921 13:58:49 -- common/autotest_common.sh@955 -- # kill 67965 00:16:09.921 13:58:49 -- common/autotest_common.sh@960 -- # wait 67965 00:16:11.817 13:58:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:11.817 13:58:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:11.817 13:58:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:11.817 13:58:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.817 13:58:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:11.817 13:58:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.817 13:58:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.817 13:58:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:11.817 13:58:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:11.817 00:16:11.817 real 0m15.032s 00:16:11.817 user 0m52.322s 00:16:11.817 sys 0m2.773s 00:16:11.817 13:58:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:11.817 13:58:51 -- common/autotest_common.sh@10 -- # set +x 00:16:11.817 ************************************ 00:16:11.817 END TEST nvmf_connect_disconnect 00:16:11.817 ************************************ 00:16:11.817 13:58:51 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:11.817 13:58:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:11.817 13:58:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:11.817 13:58:51 -- common/autotest_common.sh@10 -- # set +x 00:16:11.817 ************************************ 00:16:11.817 START TEST nvmf_multitarget 00:16:11.817 ************************************ 00:16:11.817 13:58:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:12.076 * Looking for test storage... 
00:16:12.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:12.076 13:58:51 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.076 13:58:51 -- nvmf/common.sh@7 -- # uname -s 00:16:12.076 13:58:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.076 13:58:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.076 13:58:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.076 13:58:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.076 13:58:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.076 13:58:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.076 13:58:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.076 13:58:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.076 13:58:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.076 13:58:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.076 13:58:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:12.076 13:58:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:12.076 13:58:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.076 13:58:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.076 13:58:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.076 13:58:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.076 13:58:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.076 13:58:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.076 13:58:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.076 13:58:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.076 13:58:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.076 13:58:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.076 13:58:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.076 13:58:51 -- paths/export.sh@5 -- # export PATH 00:16:12.076 13:58:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.076 13:58:51 -- nvmf/common.sh@47 -- # : 0 00:16:12.076 13:58:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.076 13:58:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.076 13:58:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.076 13:58:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.076 13:58:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.076 13:58:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.076 13:58:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.076 13:58:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.076 13:58:51 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:12.076 13:58:51 -- target/multitarget.sh@15 -- # nvmftestinit 00:16:12.076 13:58:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:12.076 13:58:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.076 13:58:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:12.076 13:58:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:12.076 13:58:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:12.076 13:58:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.076 13:58:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.076 13:58:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.076 13:58:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:12.076 13:58:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:12.076 13:58:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:12.076 13:58:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:12.076 13:58:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:12.076 13:58:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:12.077 13:58:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.077 13:58:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.077 13:58:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:12.077 13:58:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:12.077 13:58:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.077 13:58:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.077 13:58:51 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.077 13:58:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.077 13:58:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.077 13:58:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.077 13:58:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.077 13:58:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.077 13:58:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:12.077 13:58:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:12.077 Cannot find device "nvmf_tgt_br" 00:16:12.077 13:58:51 -- nvmf/common.sh@155 -- # true 00:16:12.077 13:58:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.077 Cannot find device "nvmf_tgt_br2" 00:16:12.077 13:58:51 -- nvmf/common.sh@156 -- # true 00:16:12.077 13:58:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:12.077 13:58:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:12.077 Cannot find device "nvmf_tgt_br" 00:16:12.077 13:58:51 -- nvmf/common.sh@158 -- # true 00:16:12.077 13:58:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:12.077 Cannot find device "nvmf_tgt_br2" 00:16:12.077 13:58:51 -- nvmf/common.sh@159 -- # true 00:16:12.077 13:58:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:12.077 13:58:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:12.336 13:58:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.336 13:58:51 -- nvmf/common.sh@162 -- # true 00:16:12.336 13:58:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.336 13:58:51 -- nvmf/common.sh@163 -- # true 00:16:12.336 13:58:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.336 13:58:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.336 13:58:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.336 13:58:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.336 13:58:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.336 13:58:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.336 13:58:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.336 13:58:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:12.336 13:58:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:12.336 13:58:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:12.336 13:58:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:12.336 13:58:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:12.336 13:58:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:12.336 13:58:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.336 13:58:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.336 13:58:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:16:12.336 13:58:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:12.336 13:58:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:12.336 13:58:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.336 13:58:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.336 13:58:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.336 13:58:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.336 13:58:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.336 13:58:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:12.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:16:12.336 00:16:12.336 --- 10.0.0.2 ping statistics --- 00:16:12.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.336 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:12.336 13:58:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:12.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:16:12.336 00:16:12.336 --- 10.0.0.3 ping statistics --- 00:16:12.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.336 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:12.336 13:58:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.596 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.596 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:16:12.596 00:16:12.596 --- 10.0.0.1 ping statistics --- 00:16:12.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.596 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:12.596 13:58:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.596 13:58:52 -- nvmf/common.sh@422 -- # return 0 00:16:12.596 13:58:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:12.596 13:58:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.596 13:58:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:12.596 13:58:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:12.596 13:58:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.596 13:58:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:12.596 13:58:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:12.596 13:58:52 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:12.596 13:58:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:12.596 13:58:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:12.596 13:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:12.596 13:58:52 -- nvmf/common.sh@470 -- # nvmfpid=68394 00:16:12.596 13:58:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:12.596 13:58:52 -- nvmf/common.sh@471 -- # waitforlisten 68394 00:16:12.596 13:58:52 -- common/autotest_common.sh@817 -- # '[' -z 68394 ']' 00:16:12.596 13:58:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.596 13:58:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
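For readers following the nvmf_veth_init trace above: the helper builds a bridged veth topology between the host (initiator side) and a fresh network namespace (target side). A condensed sketch of those steps, using the device names and addresses from this run, follows; the real helper in test/nvmf/common.sh also creates a second target interface (nvmf_tgt_if2 / 10.0.0.3) and first tears down any stale devices, as the "Cannot find device" lines above show.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator <-> bridge pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target    <-> bridge pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # hang both veth peers off the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # sanity check: initiator reaches target IP

The three pings logged above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) confirm the bridge forwards in both directions before the target application is started.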
00:16:12.596 13:58:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.596 13:58:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.596 13:58:52 -- common/autotest_common.sh@10 -- # set +x 00:16:12.596 [2024-04-26 13:58:52.156812] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:12.596 [2024-04-26 13:58:52.156938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.855 [2024-04-26 13:58:52.334792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.114 [2024-04-26 13:58:52.599139] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.114 [2024-04-26 13:58:52.599207] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.114 [2024-04-26 13:58:52.599224] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.114 [2024-04-26 13:58:52.599236] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.114 [2024-04-26 13:58:52.599249] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.114 [2024-04-26 13:58:52.599449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.114 [2024-04-26 13:58:52.600241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.114 [2024-04-26 13:58:52.600398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.114 [2024-04-26 13:58:52.600433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.373 13:58:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.373 13:58:53 -- common/autotest_common.sh@850 -- # return 0 00:16:13.373 13:58:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:13.373 13:58:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:13.373 13:58:53 -- common/autotest_common.sh@10 -- # set +x 00:16:13.670 13:58:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.670 13:58:53 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:13.670 13:58:53 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:13.670 13:58:53 -- target/multitarget.sh@21 -- # jq length 00:16:13.670 13:58:53 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:13.670 13:58:53 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:13.670 "nvmf_tgt_1" 00:16:13.670 13:58:53 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:13.929 "nvmf_tgt_2" 00:16:13.929 13:58:53 -- target/multitarget.sh@28 -- # jq length 00:16:13.929 13:58:53 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:13.929 13:58:53 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:13.929 13:58:53 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_1 00:16:14.187 true 00:16:14.187 13:58:53 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:14.187 true 00:16:14.187 13:58:53 -- target/multitarget.sh@35 -- # jq length 00:16:14.187 13:58:53 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:14.187 13:58:53 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:14.187 13:58:53 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:14.187 13:58:53 -- target/multitarget.sh@41 -- # nvmftestfini 00:16:14.187 13:58:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:14.187 13:58:53 -- nvmf/common.sh@117 -- # sync 00:16:14.447 13:58:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.447 13:58:53 -- nvmf/common.sh@120 -- # set +e 00:16:14.447 13:58:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.447 13:58:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.447 rmmod nvme_tcp 00:16:14.447 rmmod nvme_fabrics 00:16:14.447 rmmod nvme_keyring 00:16:14.447 13:58:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.447 13:58:53 -- nvmf/common.sh@124 -- # set -e 00:16:14.447 13:58:53 -- nvmf/common.sh@125 -- # return 0 00:16:14.447 13:58:53 -- nvmf/common.sh@478 -- # '[' -n 68394 ']' 00:16:14.447 13:58:53 -- nvmf/common.sh@479 -- # killprocess 68394 00:16:14.447 13:58:53 -- common/autotest_common.sh@936 -- # '[' -z 68394 ']' 00:16:14.447 13:58:53 -- common/autotest_common.sh@940 -- # kill -0 68394 00:16:14.447 13:58:53 -- common/autotest_common.sh@941 -- # uname 00:16:14.447 13:58:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:14.447 13:58:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68394 00:16:14.447 13:58:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:14.447 13:58:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:14.447 13:58:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68394' 00:16:14.447 killing process with pid 68394 00:16:14.447 13:58:53 -- common/autotest_common.sh@955 -- # kill 68394 00:16:14.447 13:58:53 -- common/autotest_common.sh@960 -- # wait 68394 00:16:15.823 13:58:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:15.823 13:58:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:15.823 13:58:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:15.823 13:58:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:15.823 13:58:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:15.823 13:58:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.823 13:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.823 13:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.823 13:58:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:15.823 00:16:15.823 real 0m3.916s 00:16:15.823 user 0m10.625s 00:16:15.823 sys 0m0.985s 00:16:15.823 13:58:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:15.823 13:58:55 -- common/autotest_common.sh@10 -- # set +x 00:16:15.823 ************************************ 00:16:15.823 END TEST nvmf_multitarget 00:16:15.823 ************************************ 00:16:15.823 13:58:55 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:15.823 13:58:55 -- common/autotest_common.sh@1087 -- # '[' 
3 -le 1 ']' 00:16:15.823 13:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:15.823 13:58:55 -- common/autotest_common.sh@10 -- # set +x 00:16:15.823 ************************************ 00:16:15.823 START TEST nvmf_rpc 00:16:15.823 ************************************ 00:16:15.823 13:58:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:16.083 * Looking for test storage... 00:16:16.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.083 13:58:55 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.083 13:58:55 -- nvmf/common.sh@7 -- # uname -s 00:16:16.083 13:58:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.083 13:58:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.083 13:58:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.083 13:58:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.083 13:58:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.083 13:58:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.083 13:58:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.083 13:58:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.083 13:58:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.083 13:58:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.083 13:58:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:16.083 13:58:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:16.083 13:58:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.083 13:58:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.083 13:58:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.083 13:58:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.083 13:58:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.083 13:58:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.083 13:58:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.083 13:58:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.083 13:58:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.083 13:58:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.083 13:58:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.083 13:58:55 -- paths/export.sh@5 -- # export PATH 00:16:16.083 13:58:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.083 13:58:55 -- nvmf/common.sh@47 -- # : 0 00:16:16.083 13:58:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.083 13:58:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:16.083 13:58:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.083 13:58:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.083 13:58:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.083 13:58:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.083 13:58:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.083 13:58:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.083 13:58:55 -- target/rpc.sh@11 -- # loops=5 00:16:16.083 13:58:55 -- target/rpc.sh@23 -- # nvmftestinit 00:16:16.083 13:58:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:16.083 13:58:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.083 13:58:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:16.083 13:58:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:16.083 13:58:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:16.083 13:58:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.083 13:58:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.083 13:58:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.083 13:58:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:16.083 13:58:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:16.083 13:58:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:16.083 13:58:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:16.083 13:58:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:16.083 13:58:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:16.083 13:58:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.083 13:58:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.083 13:58:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.083 13:58:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:16.083 13:58:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.083 13:58:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.083 13:58:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.083 13:58:55 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.083 13:58:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.083 13:58:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.083 13:58:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.083 13:58:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.083 13:58:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:16.083 13:58:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:16.083 Cannot find device "nvmf_tgt_br" 00:16:16.083 13:58:55 -- nvmf/common.sh@155 -- # true 00:16:16.083 13:58:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.083 Cannot find device "nvmf_tgt_br2" 00:16:16.083 13:58:55 -- nvmf/common.sh@156 -- # true 00:16:16.083 13:58:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:16.083 13:58:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:16.083 Cannot find device "nvmf_tgt_br" 00:16:16.083 13:58:55 -- nvmf/common.sh@158 -- # true 00:16:16.083 13:58:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:16.083 Cannot find device "nvmf_tgt_br2" 00:16:16.083 13:58:55 -- nvmf/common.sh@159 -- # true 00:16:16.083 13:58:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:16.343 13:58:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:16.343 13:58:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.343 13:58:55 -- nvmf/common.sh@162 -- # true 00:16:16.343 13:58:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.343 13:58:55 -- nvmf/common.sh@163 -- # true 00:16:16.343 13:58:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.343 13:58:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.344 13:58:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.344 13:58:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.344 13:58:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.344 13:58:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.344 13:58:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.344 13:58:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.344 13:58:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.344 13:58:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:16.344 13:58:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:16.344 13:58:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:16.344 13:58:55 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:16.344 13:58:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.344 13:58:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.344 13:58:55 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.344 13:58:55 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:16:16.344 13:58:55 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:16.344 13:58:55 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.344 13:58:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.344 13:58:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.603 13:58:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.603 13:58:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.603 13:58:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:16.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:16:16.604 00:16:16.604 --- 10.0.0.2 ping statistics --- 00:16:16.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.604 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:16:16.604 13:58:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:16.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:16:16.604 00:16:16.604 --- 10.0.0.3 ping statistics --- 00:16:16.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.604 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:16.604 13:58:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:16:16.604 00:16:16.604 --- 10.0.0.1 ping statistics --- 00:16:16.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.604 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:16.604 13:58:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.604 13:58:56 -- nvmf/common.sh@422 -- # return 0 00:16:16.604 13:58:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:16.604 13:58:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.604 13:58:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:16.604 13:58:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:16.604 13:58:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.604 13:58:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:16.604 13:58:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:16.604 13:58:56 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:16.604 13:58:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:16.604 13:58:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:16.604 13:58:56 -- common/autotest_common.sh@10 -- # set +x 00:16:16.604 13:58:56 -- nvmf/common.sh@470 -- # nvmfpid=68646 00:16:16.604 13:58:56 -- nvmf/common.sh@471 -- # waitforlisten 68646 00:16:16.604 13:58:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:16.604 13:58:56 -- common/autotest_common.sh@817 -- # '[' -z 68646 ']' 00:16:16.604 13:58:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.604 13:58:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:16.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.604 13:58:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
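At this point nvmfappstart launches nvmf_tgt inside the target namespace and waits for its RPC socket. A rough, simplified equivalent is sketched below; the real waitforlisten helper in common/autotest_common.sh polls with a retry limit and better diagnostics, and the probe command used here (rpc_get_methods via scripts/rpc.py) is only one reasonable way to check that the socket is up, not necessarily the one the helper uses.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# the UNIX-domain RPC socket lives on the shared filesystem, so it can be polled
# from the host even though the application runs in another network namespace
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done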
00:16:16.604 13:58:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:16.604 13:58:56 -- common/autotest_common.sh@10 -- # set +x 00:16:16.604 [2024-04-26 13:58:56.200364] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:16.604 [2024-04-26 13:58:56.200505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.863 [2024-04-26 13:58:56.373854] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:17.121 [2024-04-26 13:58:56.616924] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.121 [2024-04-26 13:58:56.616985] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.121 [2024-04-26 13:58:56.617001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.121 [2024-04-26 13:58:56.617012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.121 [2024-04-26 13:58:56.617024] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.121 [2024-04-26 13:58:56.617292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.121 [2024-04-26 13:58:56.617423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.121 [2024-04-26 13:58:56.618319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.121 [2024-04-26 13:58:56.618364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.380 13:58:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:17.380 13:58:57 -- common/autotest_common.sh@850 -- # return 0 00:16:17.380 13:58:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:17.380 13:58:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:17.380 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.639 13:58:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.640 13:58:57 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:17.640 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.640 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.640 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.640 13:58:57 -- target/rpc.sh@26 -- # stats='{ 00:16:17.640 "poll_groups": [ 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_0", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [] 00:16:17.640 }, 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_1", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [] 00:16:17.640 }, 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_2", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [] 00:16:17.640 }, 00:16:17.640 { 
00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_3", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [] 00:16:17.640 } 00:16:17.640 ], 00:16:17.640 "tick_rate": 2490000000 00:16:17.640 }' 00:16:17.640 13:58:57 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:17.640 13:58:57 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:17.640 13:58:57 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:17.640 13:58:57 -- target/rpc.sh@15 -- # wc -l 00:16:17.640 13:58:57 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:17.640 13:58:57 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:17.640 13:58:57 -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:17.640 13:58:57 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.640 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.640 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.640 [2024-04-26 13:58:57.228434] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.640 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.640 13:58:57 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:17.640 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.640 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.640 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.640 13:58:57 -- target/rpc.sh@33 -- # stats='{ 00:16:17.640 "poll_groups": [ 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_0", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [ 00:16:17.640 { 00:16:17.640 "trtype": "TCP" 00:16:17.640 } 00:16:17.640 ] 00:16:17.640 }, 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_1", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [ 00:16:17.640 { 00:16:17.640 "trtype": "TCP" 00:16:17.640 } 00:16:17.640 ] 00:16:17.640 }, 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_2", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [ 00:16:17.640 { 00:16:17.640 "trtype": "TCP" 00:16:17.640 } 00:16:17.640 ] 00:16:17.640 }, 00:16:17.640 { 00:16:17.640 "admin_qpairs": 0, 00:16:17.640 "completed_nvme_io": 0, 00:16:17.640 "current_admin_qpairs": 0, 00:16:17.640 "current_io_qpairs": 0, 00:16:17.640 "io_qpairs": 0, 00:16:17.640 "name": "nvmf_tgt_poll_group_3", 00:16:17.640 "pending_bdev_io": 0, 00:16:17.640 "transports": [ 00:16:17.640 { 00:16:17.640 "trtype": "TCP" 00:16:17.640 } 00:16:17.640 ] 00:16:17.640 } 00:16:17.640 ], 00:16:17.640 "tick_rate": 2490000000 00:16:17.640 }' 00:16:17.640 13:58:57 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:17.640 13:58:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:17.640 13:58:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:17.640 13:58:57 -- 
target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:17.900 13:58:57 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:17.900 13:58:57 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:17.900 13:58:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:17.900 13:58:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:17.900 13:58:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:17.900 13:58:57 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:17.900 13:58:57 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:17.900 13:58:57 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:17.900 13:58:57 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:17.900 13:58:57 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:17.900 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.900 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.900 Malloc1 00:16:17.900 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.900 13:58:57 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:17.900 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.900 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.900 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.900 13:58:57 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:17.900 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.900 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.900 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.900 13:58:57 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:17.900 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.900 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.900 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.900 13:58:57 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.900 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.900 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:17.900 [2024-04-26 13:58:57.514662] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.900 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:17.900 13:58:57 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -a 10.0.0.2 -s 4420 00:16:17.900 13:58:57 -- common/autotest_common.sh@638 -- # local es=0 00:16:17.900 13:58:57 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -a 10.0.0.2 -s 4420 00:16:17.900 13:58:57 -- common/autotest_common.sh@626 -- # local arg=nvme 00:16:17.900 13:58:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:17.900 13:58:57 -- common/autotest_common.sh@630 -- # type -t nvme 00:16:17.900 13:58:57 -- common/autotest_common.sh@630 -- # case "$(type -t 
"$arg")" in 00:16:17.900 13:58:57 -- common/autotest_common.sh@632 -- # type -P nvme 00:16:17.900 13:58:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:17.900 13:58:57 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:16:17.900 13:58:57 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:16:17.900 13:58:57 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -a 10.0.0.2 -s 4420 00:16:17.900 [2024-04-26 13:58:57.551938] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604' 00:16:17.900 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:17.900 could not add new controller: failed to write to nvme-fabrics device 00:16:17.900 13:58:57 -- common/autotest_common.sh@641 -- # es=1 00:16:17.900 13:58:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:17.900 13:58:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:17.900 13:58:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:17.900 13:58:57 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:17.900 13:58:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:17.900 13:58:57 -- common/autotest_common.sh@10 -- # set +x 00:16:18.159 13:58:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.159 13:58:57 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:18.159 13:58:57 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:18.159 13:58:57 -- common/autotest_common.sh@1184 -- # local i=0 00:16:18.159 13:58:57 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.159 13:58:57 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:18.159 13:58:57 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:20.709 13:58:59 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:20.709 13:58:59 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:20.709 13:58:59 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:20.709 13:58:59 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:20.709 13:58:59 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.709 13:58:59 -- common/autotest_common.sh@1194 -- # return 0 00:16:20.709 13:58:59 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:20.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.709 13:58:59 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:20.709 13:58:59 -- common/autotest_common.sh@1205 -- # local i=0 00:16:20.710 13:58:59 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:20.710 13:58:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.710 13:58:59 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:20.710 13:58:59 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:20.710 
13:58:59 -- common/autotest_common.sh@1217 -- # return 0 00:16:20.710 13:58:59 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:20.710 13:58:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.710 13:58:59 -- common/autotest_common.sh@10 -- # set +x 00:16:20.710 13:58:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.710 13:58:59 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.710 13:58:59 -- common/autotest_common.sh@638 -- # local es=0 00:16:20.710 13:58:59 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.710 13:58:59 -- common/autotest_common.sh@626 -- # local arg=nvme 00:16:20.710 13:58:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:20.710 13:58:59 -- common/autotest_common.sh@630 -- # type -t nvme 00:16:20.710 13:58:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:20.710 13:58:59 -- common/autotest_common.sh@632 -- # type -P nvme 00:16:20.710 13:58:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:20.710 13:58:59 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:16:20.710 13:58:59 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:16:20.710 13:58:59 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.710 [2024-04-26 13:59:00.009969] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604' 00:16:20.710 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:20.710 could not add new controller: failed to write to nvme-fabrics device 00:16:20.710 13:59:00 -- common/autotest_common.sh@641 -- # es=1 00:16:20.710 13:59:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:20.710 13:59:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:20.710 13:59:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:20.710 13:59:00 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:20.710 13:59:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.710 13:59:00 -- common/autotest_common.sh@10 -- # set +x 00:16:20.710 13:59:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.710 13:59:00 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:20.710 13:59:00 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:20.710 13:59:00 -- common/autotest_common.sh@1184 -- # local i=0 00:16:20.710 13:59:00 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.710 13:59:00 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:20.710 13:59:00 -- 
common/autotest_common.sh@1191 -- # sleep 2 00:16:22.614 13:59:02 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:22.614 13:59:02 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:22.614 13:59:02 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:22.614 13:59:02 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:22.614 13:59:02 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.614 13:59:02 -- common/autotest_common.sh@1194 -- # return 0 00:16:22.614 13:59:02 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:22.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.872 13:59:02 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:22.872 13:59:02 -- common/autotest_common.sh@1205 -- # local i=0 00:16:22.872 13:59:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:22.872 13:59:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.872 13:59:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:22.872 13:59:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:22.872 13:59:02 -- common/autotest_common.sh@1217 -- # return 0 00:16:22.872 13:59:02 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.872 13:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.872 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 13:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.872 13:59:02 -- target/rpc.sh@81 -- # seq 1 5 00:16:22.872 13:59:02 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:22.872 13:59:02 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:22.872 13:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.872 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 13:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.872 13:59:02 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.872 13:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.872 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 [2024-04-26 13:59:02.361582] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.872 13:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.872 13:59:02 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:22.872 13:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.872 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 13:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.872 13:59:02 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:22.872 13:59:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.872 13:59:02 -- common/autotest_common.sh@10 -- # set +x 00:16:22.872 13:59:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.872 13:59:02 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:23.132 13:59:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 
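The two "does not allow host" failures earlier in this test are expected: rpc.sh first verifies that a connect attempt is rejected while the host NQN is not on the subsystem's allow list, then adds the host and connects successfully, then removes it and checks the rejection again before enabling allow_any_host. In rpc.py terms the sequence looks roughly like this (NQNs and address taken from this run; the test drives the same RPCs through rpc_cmd against the target's socket rather than calling rpc.py directly):

# with no hosts added and allow_any_host disabled, the connect must fail
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" && exit 1   # expected: Input/output error
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"             # now allowed
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
# enabling allow_any_host (-e) makes the allow list irrelevant again
rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1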
00:16:23.132 13:59:02 -- common/autotest_common.sh@1184 -- # local i=0 00:16:23.132 13:59:02 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:23.132 13:59:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:23.132 13:59:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:25.038 13:59:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:25.038 13:59:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:25.038 13:59:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:25.038 13:59:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:25.038 13:59:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:25.038 13:59:04 -- common/autotest_common.sh@1194 -- # return 0 00:16:25.038 13:59:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.038 13:59:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.038 13:59:04 -- common/autotest_common.sh@1205 -- # local i=0 00:16:25.038 13:59:04 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:25.038 13:59:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.038 13:59:04 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:25.038 13:59:04 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.038 13:59:04 -- common/autotest_common.sh@1217 -- # return 0 00:16:25.038 13:59:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:25.038 13:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.038 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:25.038 13:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.038 13:59:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.038 13:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.038 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:25.038 13:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.038 13:59:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:25.038 13:59:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:25.038 13:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.038 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:25.038 13:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.038 13:59:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.038 13:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.038 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:25.038 [2024-04-26 13:59:04.703547] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.038 13:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.038 13:59:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:25.038 13:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.038 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:25.298 13:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.298 13:59:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode1 00:16:25.298 13:59:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:25.298 13:59:04 -- common/autotest_common.sh@10 -- # set +x 00:16:25.298 13:59:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:25.298 13:59:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.298 13:59:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:25.298 13:59:04 -- common/autotest_common.sh@1184 -- # local i=0 00:16:25.298 13:59:04 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.298 13:59:04 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:25.298 13:59:04 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:27.835 13:59:06 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:27.835 13:59:06 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:27.835 13:59:06 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.835 13:59:06 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:27.835 13:59:06 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.835 13:59:06 -- common/autotest_common.sh@1194 -- # return 0 00:16:27.835 13:59:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.835 13:59:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.835 13:59:07 -- common/autotest_common.sh@1205 -- # local i=0 00:16:27.835 13:59:07 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:27.835 13:59:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.835 13:59:07 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:27.835 13:59:07 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.835 13:59:07 -- common/autotest_common.sh@1217 -- # return 0 00:16:27.835 13:59:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:27.835 13:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.835 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 13:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.835 13:59:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.835 13:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.835 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 13:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.835 13:59:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:27.835 13:59:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:27.835 13:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.835 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 13:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.835 13:59:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.835 13:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.835 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 [2024-04-26 13:59:07.144135] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.835 13:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.835 13:59:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:27.835 13:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.835 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 13:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.835 13:59:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:27.835 13:59:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.835 13:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:27.835 13:59:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.835 13:59:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:27.835 13:59:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:27.835 13:59:07 -- common/autotest_common.sh@1184 -- # local i=0 00:16:27.835 13:59:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:27.835 13:59:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:27.835 13:59:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:29.742 13:59:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:29.742 13:59:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:29.742 13:59:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:29.742 13:59:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:29.742 13:59:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:29.742 13:59:09 -- common/autotest_common.sh@1194 -- # return 0 00:16:29.742 13:59:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:30.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.002 13:59:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:30.002 13:59:09 -- common/autotest_common.sh@1205 -- # local i=0 00:16:30.002 13:59:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:30.002 13:59:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.002 13:59:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:30.002 13:59:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:30.002 13:59:09 -- common/autotest_common.sh@1217 -- # return 0 00:16:30.002 13:59:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:30.002 13:59:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.002 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 13:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.002 13:59:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:30.002 13:59:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.002 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 13:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.002 13:59:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:30.002 13:59:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:30.002 13:59:09 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.002 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 13:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.002 13:59:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:30.002 13:59:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.002 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 [2024-04-26 13:59:09.593817] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.002 13:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.002 13:59:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:30.002 13:59:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.002 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 13:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.002 13:59:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:30.002 13:59:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.002 13:59:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 13:59:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.002 13:59:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:30.260 13:59:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:30.260 13:59:09 -- common/autotest_common.sh@1184 -- # local i=0 00:16:30.260 13:59:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:30.260 13:59:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:30.260 13:59:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:32.157 13:59:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:32.157 13:59:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:32.157 13:59:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:32.157 13:59:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:32.157 13:59:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:32.157 13:59:11 -- common/autotest_common.sh@1194 -- # return 0 00:16:32.157 13:59:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.415 13:59:11 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.415 13:59:11 -- common/autotest_common.sh@1205 -- # local i=0 00:16:32.415 13:59:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:32.415 13:59:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.415 13:59:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.415 13:59:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:32.415 13:59:11 -- common/autotest_common.sh@1217 -- # return 0 00:16:32.415 13:59:11 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:32.415 13:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.415 13:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.415 13:59:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:16:32.415 13:59:11 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.415 13:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.415 13:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.415 13:59:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.415 13:59:11 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:32.415 13:59:11 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:32.415 13:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.415 13:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.415 13:59:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.415 13:59:11 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:32.415 13:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.415 13:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.415 [2024-04-26 13:59:11.943740] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.415 13:59:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.415 13:59:11 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:32.415 13:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.415 13:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.415 13:59:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.415 13:59:11 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:32.415 13:59:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.415 13:59:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.415 13:59:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.416 13:59:11 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.674 13:59:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:32.674 13:59:12 -- common/autotest_common.sh@1184 -- # local i=0 00:16:32.674 13:59:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:32.674 13:59:12 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:32.674 13:59:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:34.574 13:59:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:34.574 13:59:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:34.574 13:59:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:34.574 13:59:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:34.574 13:59:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:34.574 13:59:14 -- common/autotest_common.sh@1194 -- # return 0 00:16:34.574 13:59:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:34.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.574 13:59:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:34.574 13:59:14 -- common/autotest_common.sh@1205 -- # local i=0 00:16:34.574 13:59:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.574 13:59:14 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:34.835 13:59:14 -- 
common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:34.835 13:59:14 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:34.835 13:59:14 -- common/autotest_common.sh@1217 -- # return 0 00:16:34.835 13:59:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@99 -- # seq 1 5 00:16:34.835 13:59:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.835 13:59:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 [2024-04-26 13:59:14.310250] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.835 13:59:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 
13:59:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 [2024-04-26 13:59:14.366270] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.835 13:59:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 [2024-04-26 13:59:14.426176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- 
common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:34.835 13:59:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 [2024-04-26 13:59:14.490204] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.835 13:59:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:34.835 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.835 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.093 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.093 13:59:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.093 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.093 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.093 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.093 13:59:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.093 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.093 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.093 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.093 13:59:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:35.093 13:59:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:35.093 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.093 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.093 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.093 13:59:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.093 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.093 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.093 [2024-04-26 13:59:14.554215] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.093 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.094 
13:59:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:35.094 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.094 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.094 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.094 13:59:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:35.094 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.094 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.094 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.094 13:59:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.094 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.094 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.094 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.094 13:59:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.094 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.094 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.094 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.094 13:59:14 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:35.094 13:59:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.094 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.094 13:59:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.094 13:59:14 -- target/rpc.sh@110 -- # stats='{ 00:16:35.094 "poll_groups": [ 00:16:35.094 { 00:16:35.094 "admin_qpairs": 2, 00:16:35.094 "completed_nvme_io": 65, 00:16:35.094 "current_admin_qpairs": 0, 00:16:35.094 "current_io_qpairs": 0, 00:16:35.094 "io_qpairs": 16, 00:16:35.094 "name": "nvmf_tgt_poll_group_0", 00:16:35.094 "pending_bdev_io": 0, 00:16:35.094 "transports": [ 00:16:35.094 { 00:16:35.094 "trtype": "TCP" 00:16:35.094 } 00:16:35.094 ] 00:16:35.094 }, 00:16:35.094 { 00:16:35.094 "admin_qpairs": 3, 00:16:35.094 "completed_nvme_io": 118, 00:16:35.094 "current_admin_qpairs": 0, 00:16:35.094 "current_io_qpairs": 0, 00:16:35.094 "io_qpairs": 17, 00:16:35.094 "name": "nvmf_tgt_poll_group_1", 00:16:35.094 "pending_bdev_io": 0, 00:16:35.094 "transports": [ 00:16:35.094 { 00:16:35.094 "trtype": "TCP" 00:16:35.094 } 00:16:35.094 ] 00:16:35.094 }, 00:16:35.094 { 00:16:35.094 "admin_qpairs": 1, 00:16:35.094 "completed_nvme_io": 169, 00:16:35.094 "current_admin_qpairs": 0, 00:16:35.094 "current_io_qpairs": 0, 00:16:35.094 "io_qpairs": 19, 00:16:35.094 "name": "nvmf_tgt_poll_group_2", 00:16:35.094 "pending_bdev_io": 0, 00:16:35.094 "transports": [ 00:16:35.094 { 00:16:35.094 "trtype": "TCP" 00:16:35.094 } 00:16:35.094 ] 00:16:35.094 }, 00:16:35.094 { 00:16:35.094 "admin_qpairs": 1, 00:16:35.094 "completed_nvme_io": 68, 00:16:35.094 "current_admin_qpairs": 0, 00:16:35.094 "current_io_qpairs": 0, 00:16:35.094 "io_qpairs": 18, 00:16:35.094 "name": "nvmf_tgt_poll_group_3", 00:16:35.094 "pending_bdev_io": 0, 00:16:35.094 "transports": [ 00:16:35.094 { 00:16:35.094 "trtype": "TCP" 00:16:35.094 } 00:16:35.094 ] 00:16:35.094 } 00:16:35.094 ], 00:16:35.094 "tick_rate": 2490000000 00:16:35.094 }' 00:16:35.094 13:59:14 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:35.094 13:59:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:35.094 13:59:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 
00:16:35.094 13:59:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.094 13:59:14 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:35.094 13:59:14 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:35.094 13:59:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:35.094 13:59:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:35.094 13:59:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:35.094 13:59:14 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:16:35.094 13:59:14 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:35.094 13:59:14 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:35.094 13:59:14 -- target/rpc.sh@123 -- # nvmftestfini 00:16:35.094 13:59:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:35.094 13:59:14 -- nvmf/common.sh@117 -- # sync 00:16:35.353 13:59:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:35.353 13:59:14 -- nvmf/common.sh@120 -- # set +e 00:16:35.353 13:59:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.353 13:59:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:35.353 rmmod nvme_tcp 00:16:35.353 rmmod nvme_fabrics 00:16:35.353 rmmod nvme_keyring 00:16:35.353 13:59:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.353 13:59:14 -- nvmf/common.sh@124 -- # set -e 00:16:35.353 13:59:14 -- nvmf/common.sh@125 -- # return 0 00:16:35.353 13:59:14 -- nvmf/common.sh@478 -- # '[' -n 68646 ']' 00:16:35.353 13:59:14 -- nvmf/common.sh@479 -- # killprocess 68646 00:16:35.353 13:59:14 -- common/autotest_common.sh@936 -- # '[' -z 68646 ']' 00:16:35.353 13:59:14 -- common/autotest_common.sh@940 -- # kill -0 68646 00:16:35.353 13:59:14 -- common/autotest_common.sh@941 -- # uname 00:16:35.353 13:59:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.353 13:59:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68646 00:16:35.353 13:59:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.353 killing process with pid 68646 00:16:35.353 13:59:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.353 13:59:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68646' 00:16:35.353 13:59:14 -- common/autotest_common.sh@955 -- # kill 68646 00:16:35.353 13:59:14 -- common/autotest_common.sh@960 -- # wait 68646 00:16:37.254 13:59:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:37.254 13:59:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:37.254 13:59:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:37.254 13:59:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.254 13:59:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:37.254 13:59:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.255 13:59:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.255 13:59:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.255 13:59:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:37.255 00:16:37.255 real 0m21.035s 00:16:37.255 user 1m15.857s 00:16:37.255 sys 0m3.779s 00:16:37.255 13:59:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:37.255 ************************************ 00:16:37.255 END TEST nvmf_rpc 00:16:37.255 ************************************ 00:16:37.255 13:59:16 -- common/autotest_common.sh@10 -- # set +x 00:16:37.255 13:59:16 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh 
--transport=tcp 00:16:37.255 13:59:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:37.255 13:59:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.255 13:59:16 -- common/autotest_common.sh@10 -- # set +x 00:16:37.255 ************************************ 00:16:37.255 START TEST nvmf_invalid 00:16:37.255 ************************************ 00:16:37.255 13:59:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:37.255 * Looking for test storage... 00:16:37.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:37.255 13:59:16 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:37.255 13:59:16 -- nvmf/common.sh@7 -- # uname -s 00:16:37.255 13:59:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.255 13:59:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.255 13:59:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.255 13:59:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.255 13:59:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.255 13:59:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.255 13:59:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.255 13:59:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.255 13:59:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.255 13:59:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.255 13:59:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:37.255 13:59:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:37.255 13:59:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.255 13:59:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.255 13:59:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:37.255 13:59:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.255 13:59:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:37.255 13:59:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.255 13:59:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.255 13:59:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.255 13:59:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.255 13:59:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.255 13:59:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.255 13:59:16 -- paths/export.sh@5 -- # export PATH 00:16:37.255 13:59:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.255 13:59:16 -- nvmf/common.sh@47 -- # : 0 00:16:37.255 13:59:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.255 13:59:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.255 13:59:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.255 13:59:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.255 13:59:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.255 13:59:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.255 13:59:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.255 13:59:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.255 13:59:16 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:16:37.255 13:59:16 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:37.255 13:59:16 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:37.255 13:59:16 -- target/invalid.sh@14 -- # target=foobar 00:16:37.255 13:59:16 -- target/invalid.sh@16 -- # RANDOM=0 00:16:37.255 13:59:16 -- target/invalid.sh@34 -- # nvmftestinit 00:16:37.255 13:59:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:37.255 13:59:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.255 13:59:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:37.255 13:59:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:37.255 13:59:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:37.255 13:59:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.255 13:59:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.255 13:59:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.255 13:59:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:16:37.255 13:59:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:37.255 13:59:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:37.255 13:59:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:37.255 13:59:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:37.255 13:59:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:37.255 13:59:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.255 13:59:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.255 13:59:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:37.255 13:59:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:37.255 13:59:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:37.255 13:59:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:37.255 13:59:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:37.255 13:59:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.255 13:59:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:37.255 13:59:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:37.255 13:59:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:37.255 13:59:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:37.255 13:59:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:37.255 13:59:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:37.255 Cannot find device "nvmf_tgt_br" 00:16:37.255 13:59:16 -- nvmf/common.sh@155 -- # true 00:16:37.255 13:59:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:37.515 Cannot find device "nvmf_tgt_br2" 00:16:37.515 13:59:16 -- nvmf/common.sh@156 -- # true 00:16:37.515 13:59:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:37.515 13:59:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:37.515 Cannot find device "nvmf_tgt_br" 00:16:37.515 13:59:16 -- nvmf/common.sh@158 -- # true 00:16:37.515 13:59:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:37.515 Cannot find device "nvmf_tgt_br2" 00:16:37.515 13:59:16 -- nvmf/common.sh@159 -- # true 00:16:37.515 13:59:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:37.515 13:59:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:37.515 13:59:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:37.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.515 13:59:17 -- nvmf/common.sh@162 -- # true 00:16:37.515 13:59:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:37.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:37.515 13:59:17 -- nvmf/common.sh@163 -- # true 00:16:37.515 13:59:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:37.515 13:59:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:37.515 13:59:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:37.515 13:59:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:37.515 13:59:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:37.515 13:59:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:37.515 13:59:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:16:37.515 13:59:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:37.515 13:59:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:37.515 13:59:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:37.515 13:59:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:37.515 13:59:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:37.515 13:59:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:37.515 13:59:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:37.515 13:59:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:37.515 13:59:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:37.774 13:59:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:37.774 13:59:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:37.774 13:59:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:37.774 13:59:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:37.774 13:59:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:37.774 13:59:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:37.774 13:59:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:37.774 13:59:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:37.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:37.774 00:16:37.774 --- 10.0.0.2 ping statistics --- 00:16:37.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.774 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:37.774 13:59:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:37.774 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:37.774 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:16:37.774 00:16:37.774 --- 10.0.0.3 ping statistics --- 00:16:37.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.774 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:37.774 13:59:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:37.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:16:37.775 00:16:37.775 --- 10.0.0.1 ping statistics --- 00:16:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.775 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:37.775 13:59:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.775 13:59:17 -- nvmf/common.sh@422 -- # return 0 00:16:37.775 13:59:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:37.775 13:59:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.775 13:59:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:37.775 13:59:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:37.775 13:59:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.775 13:59:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:37.775 13:59:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:37.775 13:59:17 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:37.775 13:59:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:37.775 13:59:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:37.775 13:59:17 -- common/autotest_common.sh@10 -- # set +x 00:16:37.775 13:59:17 -- nvmf/common.sh@470 -- # nvmfpid=69181 00:16:37.775 13:59:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:37.775 13:59:17 -- nvmf/common.sh@471 -- # waitforlisten 69181 00:16:37.775 13:59:17 -- common/autotest_common.sh@817 -- # '[' -z 69181 ']' 00:16:37.775 13:59:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.775 13:59:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.775 13:59:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.775 13:59:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.775 13:59:17 -- common/autotest_common.sh@10 -- # set +x 00:16:37.775 [2024-04-26 13:59:17.412982] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:37.775 [2024-04-26 13:59:17.413090] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:38.049 [2024-04-26 13:59:17.587520] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:38.321 [2024-04-26 13:59:17.827098] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:38.321 [2024-04-26 13:59:17.827163] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:38.321 [2024-04-26 13:59:17.827180] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:38.321 [2024-04-26 13:59:17.827191] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:38.321 [2024-04-26 13:59:17.827204] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:38.321 [2024-04-26 13:59:17.827467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:38.321 [2024-04-26 13:59:17.827598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:38.321 [2024-04-26 13:59:17.828868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.321 [2024-04-26 13:59:17.828896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:38.580 13:59:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.580 13:59:18 -- common/autotest_common.sh@850 -- # return 0 00:16:38.580 13:59:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:38.580 13:59:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:38.580 13:59:18 -- common/autotest_common.sh@10 -- # set +x 00:16:38.837 13:59:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:38.837 13:59:18 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:38.837 13:59:18 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16574 00:16:38.837 [2024-04-26 13:59:18.472748] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:38.837 13:59:18 -- target/invalid.sh@40 -- # out='2024/04/26 13:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16574 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:38.837 request: 00:16:38.837 { 00:16:38.837 "method": "nvmf_create_subsystem", 00:16:38.837 "params": { 00:16:38.837 "nqn": "nqn.2016-06.io.spdk:cnode16574", 00:16:38.837 "tgt_name": "foobar" 00:16:38.837 } 00:16:38.837 } 00:16:38.837 Got JSON-RPC error response 00:16:38.837 GoRPCClient: error on JSON-RPC call' 00:16:38.837 13:59:18 -- target/invalid.sh@41 -- # [[ 2024/04/26 13:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16574 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:16:38.837 request: 00:16:38.837 { 00:16:38.837 "method": "nvmf_create_subsystem", 00:16:38.837 "params": { 00:16:38.837 "nqn": "nqn.2016-06.io.spdk:cnode16574", 00:16:38.837 "tgt_name": "foobar" 00:16:38.837 } 00:16:38.837 } 00:16:38.837 Got JSON-RPC error response 00:16:38.837 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:38.837 13:59:18 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:38.837 13:59:18 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4090 00:16:39.095 [2024-04-26 13:59:18.684629] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4090: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:39.095 13:59:18 -- target/invalid.sh@45 -- # out='2024/04/26 13:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4090 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:39.095 request: 00:16:39.095 { 00:16:39.095 "method": "nvmf_create_subsystem", 00:16:39.095 "params": { 00:16:39.095 "nqn": "nqn.2016-06.io.spdk:cnode4090", 00:16:39.095 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:16:39.095 } 00:16:39.095 } 00:16:39.095 Got JSON-RPC error response 00:16:39.095 GoRPCClient: error on JSON-RPC call' 00:16:39.095 13:59:18 -- target/invalid.sh@46 -- # [[ 2024/04/26 13:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode4090 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:16:39.095 request: 00:16:39.095 { 00:16:39.095 "method": "nvmf_create_subsystem", 00:16:39.095 "params": { 00:16:39.095 "nqn": "nqn.2016-06.io.spdk:cnode4090", 00:16:39.095 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:16:39.095 } 00:16:39.095 } 00:16:39.095 Got JSON-RPC error response 00:16:39.095 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:39.095 13:59:18 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:39.095 13:59:18 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10535 00:16:39.354 [2024-04-26 13:59:18.892536] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10535: invalid model number 'SPDK_Controller' 00:16:39.354 13:59:18 -- target/invalid.sh@50 -- # out='2024/04/26 13:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10535], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:39.354 request: 00:16:39.354 { 00:16:39.354 "method": "nvmf_create_subsystem", 00:16:39.354 "params": { 00:16:39.354 "nqn": "nqn.2016-06.io.spdk:cnode10535", 00:16:39.354 "model_number": "SPDK_Controller\u001f" 00:16:39.354 } 00:16:39.354 } 00:16:39.354 Got JSON-RPC error response 00:16:39.354 GoRPCClient: error on JSON-RPC call' 00:16:39.354 13:59:18 -- target/invalid.sh@51 -- # [[ 2024/04/26 13:59:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode10535], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:16:39.354 request: 00:16:39.354 { 00:16:39.354 "method": "nvmf_create_subsystem", 00:16:39.354 "params": { 00:16:39.354 "nqn": "nqn.2016-06.io.spdk:cnode10535", 00:16:39.354 "model_number": "SPDK_Controller\u001f" 00:16:39.354 } 00:16:39.354 } 00:16:39.354 Got JSON-RPC error response 00:16:39.354 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:39.354 13:59:18 -- target/invalid.sh@54 -- # gen_random_s 21 00:16:39.354 13:59:18 -- target/invalid.sh@19 -- # local length=21 ll 00:16:39.354 13:59:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:39.354 13:59:18 -- target/invalid.sh@21 -- # local chars 00:16:39.354 13:59:18 -- target/invalid.sh@22 -- # local string 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 
00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 104 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x68' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=h 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 48 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=0 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 59 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=';' 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 87 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x57' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=W 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 78 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=N 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 80 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=P 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 80 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=P 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 122 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=z 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 56 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=8 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 101 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=e 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 34 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x22' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+='"' 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 
00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # printf %x 45 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:16:39.354 13:59:18 -- target/invalid.sh@25 -- # string+=- 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.354 13:59:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.354 13:59:19 -- target/invalid.sh@25 -- # printf %x 108 00:16:39.354 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:39.354 13:59:19 -- target/invalid.sh@25 -- # string+=l 00:16:39.354 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.355 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # printf %x 101 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # string+=e 00:16:39.355 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.355 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # printf %x 84 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # string+=T 00:16:39.355 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.355 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.355 13:59:19 -- target/invalid.sh@25 -- # printf %x 126 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+='~' 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 35 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+='#' 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 88 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x58' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+=X 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 122 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+=z 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 52 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+=4 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 75 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+=K 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@28 -- # [[ h == \- ]] 00:16:39.614 13:59:19 -- target/invalid.sh@31 -- # echo 'h0;WNPPz8e"-leT~#Xz4K' 00:16:39.614 13:59:19 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'h0;WNPPz8e"-leT~#Xz4K' nqn.2016-06.io.spdk:cnode1583 
00:16:39.614 [2024-04-26 13:59:19.244373] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1583: invalid serial number 'h0;WNPPz8e"-leT~#Xz4K' 00:16:39.614 13:59:19 -- target/invalid.sh@54 -- # out='2024/04/26 13:59:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1583 serial_number:h0;WNPPz8e"-leT~#Xz4K], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN h0;WNPPz8e"-leT~#Xz4K 00:16:39.614 request: 00:16:39.614 { 00:16:39.614 "method": "nvmf_create_subsystem", 00:16:39.614 "params": { 00:16:39.614 "nqn": "nqn.2016-06.io.spdk:cnode1583", 00:16:39.614 "serial_number": "h0;WNPPz8e\"-leT~#Xz4K" 00:16:39.614 } 00:16:39.614 } 00:16:39.614 Got JSON-RPC error response 00:16:39.614 GoRPCClient: error on JSON-RPC call' 00:16:39.614 13:59:19 -- target/invalid.sh@55 -- # [[ 2024/04/26 13:59:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode1583 serial_number:h0;WNPPz8e"-leT~#Xz4K], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN h0;WNPPz8e"-leT~#Xz4K 00:16:39.614 request: 00:16:39.614 { 00:16:39.614 "method": "nvmf_create_subsystem", 00:16:39.614 "params": { 00:16:39.614 "nqn": "nqn.2016-06.io.spdk:cnode1583", 00:16:39.614 "serial_number": "h0;WNPPz8e\"-leT~#Xz4K" 00:16:39.614 } 00:16:39.614 } 00:16:39.614 Got JSON-RPC error response 00:16:39.614 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:39.614 13:59:19 -- target/invalid.sh@58 -- # gen_random_s 41 00:16:39.614 13:59:19 -- target/invalid.sh@19 -- # local length=41 ll 00:16:39.614 13:59:19 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:39.614 13:59:19 -- target/invalid.sh@21 -- # local chars 00:16:39.614 13:59:19 -- target/invalid.sh@22 -- # local string 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 49 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # string+=1 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.614 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # printf %x 117 00:16:39.614 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=u 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 126 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+='~' 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 77 00:16:39.874 13:59:19 -- 
target/invalid.sh@25 -- # echo -e '\x4d' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=M 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 108 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=l 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 56 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=8 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 95 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=_ 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 53 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=5 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 92 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+='\' 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 40 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x28' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+='(' 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 94 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+='^' 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 69 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x45' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=E 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 65 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=A 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 107 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=k 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 120 00:16:39.874 13:59:19 -- 
target/invalid.sh@25 -- # echo -e '\x78' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=x 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 48 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x30' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=0 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 111 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=o 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 120 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=x 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 36 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+='$' 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 62 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+='>' 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 107 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=k 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 109 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=m 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 83 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x53' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=S 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 102 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x66' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=f 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.874 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # printf %x 103 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x67' 00:16:39.874 13:59:19 -- target/invalid.sh@25 -- # string+=g 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 122 00:16:39.875 13:59:19 -- 
target/invalid.sh@25 -- # echo -e '\x7a' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=z 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 101 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=e 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 74 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=J 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 80 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=P 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 82 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=R 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 97 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=a 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 79 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=O 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 52 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x34' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=4 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 90 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=Z 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 36 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x24' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+='$' 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 76 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=L 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 66 00:16:39.875 13:59:19 -- 
target/invalid.sh@25 -- # echo -e '\x42' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=B 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # printf %x 108 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:16:39.875 13:59:19 -- target/invalid.sh@25 -- # string+=l 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:39.875 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # printf %x 107 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # string+=k 00:16:40.134 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.134 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # printf %x 122 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # string+=z 00:16:40.134 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.134 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # printf %x 33 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # echo -e '\x21' 00:16:40.134 13:59:19 -- target/invalid.sh@25 -- # string+='!' 00:16:40.134 13:59:19 -- target/invalid.sh@24 -- # (( ll++ )) 00:16:40.134 13:59:19 -- target/invalid.sh@24 -- # (( ll < length )) 00:16:40.134 13:59:19 -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:16:40.134 13:59:19 -- target/invalid.sh@31 -- # echo '1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz!' 00:16:40.134 13:59:19 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz!' nqn.2016-06.io.spdk:cnode7731 00:16:40.134 [2024-04-26 13:59:19.752186] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7731: invalid model number '1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz!' 00:16:40.134 13:59:19 -- target/invalid.sh@58 -- # out='2024/04/26 13:59:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz! nqn:nqn.2016-06.io.spdk:cnode7731], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz! 00:16:40.134 request: 00:16:40.134 { 00:16:40.134 "method": "nvmf_create_subsystem", 00:16:40.134 "params": { 00:16:40.134 "nqn": "nqn.2016-06.io.spdk:cnode7731", 00:16:40.134 "model_number": "1u~Ml8_5\\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz!" 00:16:40.134 } 00:16:40.134 } 00:16:40.134 Got JSON-RPC error response 00:16:40.134 GoRPCClient: error on JSON-RPC call' 00:16:40.134 13:59:19 -- target/invalid.sh@59 -- # [[ 2024/04/26 13:59:19 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz! nqn:nqn.2016-06.io.spdk:cnode7731], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 1u~Ml8_5\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz! 00:16:40.134 request: 00:16:40.134 { 00:16:40.134 "method": "nvmf_create_subsystem", 00:16:40.134 "params": { 00:16:40.134 "nqn": "nqn.2016-06.io.spdk:cnode7731", 00:16:40.134 "model_number": "1u~Ml8_5\\(^EAkx0ox$>kmSfgzeJPRaO4Z$LBlkz!" 
00:16:40.134 } 00:16:40.134 } 00:16:40.134 Got JSON-RPC error response 00:16:40.134 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:40.134 13:59:19 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:40.393 [2024-04-26 13:59:19.948083] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.393 13:59:19 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:40.651 13:59:20 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:40.651 13:59:20 -- target/invalid.sh@67 -- # head -n 1 00:16:40.651 13:59:20 -- target/invalid.sh@67 -- # echo '' 00:16:40.651 13:59:20 -- target/invalid.sh@67 -- # IP= 00:16:40.651 13:59:20 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:40.910 [2024-04-26 13:59:20.441584] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:40.911 13:59:20 -- target/invalid.sh@69 -- # out='2024/04/26 13:59:20 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:40.911 request: 00:16:40.911 { 00:16:40.911 "method": "nvmf_subsystem_remove_listener", 00:16:40.911 "params": { 00:16:40.911 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:40.911 "listen_address": { 00:16:40.911 "trtype": "tcp", 00:16:40.911 "traddr": "", 00:16:40.911 "trsvcid": "4421" 00:16:40.911 } 00:16:40.911 } 00:16:40.911 } 00:16:40.911 Got JSON-RPC error response 00:16:40.911 GoRPCClient: error on JSON-RPC call' 00:16:40.911 13:59:20 -- target/invalid.sh@70 -- # [[ 2024/04/26 13:59:20 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:16:40.911 request: 00:16:40.911 { 00:16:40.911 "method": "nvmf_subsystem_remove_listener", 00:16:40.911 "params": { 00:16:40.911 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:40.911 "listen_address": { 00:16:40.911 "trtype": "tcp", 00:16:40.911 "traddr": "", 00:16:40.911 "trsvcid": "4421" 00:16:40.911 } 00:16:40.911 } 00:16:40.911 } 00:16:40.911 Got JSON-RPC error response 00:16:40.911 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:40.911 13:59:20 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12769 -i 0 00:16:41.169 [2024-04-26 13:59:20.645542] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12769: invalid cntlid range [0-65519] 00:16:41.169 13:59:20 -- target/invalid.sh@73 -- # out='2024/04/26 13:59:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12769], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:41.169 request: 00:16:41.169 { 00:16:41.169 "method": "nvmf_create_subsystem", 00:16:41.169 "params": { 00:16:41.169 "nqn": "nqn.2016-06.io.spdk:cnode12769", 00:16:41.169 "min_cntlid": 0 00:16:41.169 } 00:16:41.169 } 00:16:41.169 Got JSON-RPC error response 
00:16:41.170 GoRPCClient: error on JSON-RPC call' 00:16:41.170 13:59:20 -- target/invalid.sh@74 -- # [[ 2024/04/26 13:59:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode12769], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:16:41.170 request: 00:16:41.170 { 00:16:41.170 "method": "nvmf_create_subsystem", 00:16:41.170 "params": { 00:16:41.170 "nqn": "nqn.2016-06.io.spdk:cnode12769", 00:16:41.170 "min_cntlid": 0 00:16:41.170 } 00:16:41.170 } 00:16:41.170 Got JSON-RPC error response 00:16:41.170 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.170 13:59:20 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20816 -i 65520 00:16:41.427 [2024-04-26 13:59:20.909480] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20816: invalid cntlid range [65520-65519] 00:16:41.428 13:59:20 -- target/invalid.sh@75 -- # out='2024/04/26 13:59:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:41.428 request: 00:16:41.428 { 00:16:41.428 "method": "nvmf_create_subsystem", 00:16:41.428 "params": { 00:16:41.428 "nqn": "nqn.2016-06.io.spdk:cnode20816", 00:16:41.428 "min_cntlid": 65520 00:16:41.428 } 00:16:41.428 } 00:16:41.428 Got JSON-RPC error response 00:16:41.428 GoRPCClient: error on JSON-RPC call' 00:16:41.428 13:59:20 -- target/invalid.sh@76 -- # [[ 2024/04/26 13:59:20 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode20816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:16:41.428 request: 00:16:41.428 { 00:16:41.428 "method": "nvmf_create_subsystem", 00:16:41.428 "params": { 00:16:41.428 "nqn": "nqn.2016-06.io.spdk:cnode20816", 00:16:41.428 "min_cntlid": 65520 00:16:41.428 } 00:16:41.428 } 00:16:41.428 Got JSON-RPC error response 00:16:41.428 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.428 13:59:20 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24918 -I 0 00:16:41.686 [2024-04-26 13:59:21.121495] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24918: invalid cntlid range [1-0] 00:16:41.686 13:59:21 -- target/invalid.sh@77 -- # out='2024/04/26 13:59:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24918], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:16:41.686 request: 00:16:41.686 { 00:16:41.686 "method": "nvmf_create_subsystem", 00:16:41.686 "params": { 00:16:41.686 "nqn": "nqn.2016-06.io.spdk:cnode24918", 00:16:41.686 "max_cntlid": 0 00:16:41.686 } 00:16:41.686 } 00:16:41.686 Got JSON-RPC error response 00:16:41.686 GoRPCClient: error on JSON-RPC call' 00:16:41.686 13:59:21 -- target/invalid.sh@78 -- # [[ 2024/04/26 13:59:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode24918], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 
00:16:41.686 request: 00:16:41.686 { 00:16:41.686 "method": "nvmf_create_subsystem", 00:16:41.686 "params": { 00:16:41.686 "nqn": "nqn.2016-06.io.spdk:cnode24918", 00:16:41.686 "max_cntlid": 0 00:16:41.686 } 00:16:41.686 } 00:16:41.686 Got JSON-RPC error response 00:16:41.686 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.686 13:59:21 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2728 -I 65520 00:16:41.686 [2024-04-26 13:59:21.349546] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2728: invalid cntlid range [1-65520] 00:16:41.945 13:59:21 -- target/invalid.sh@79 -- # out='2024/04/26 13:59:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2728], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:41.945 request: 00:16:41.945 { 00:16:41.945 "method": "nvmf_create_subsystem", 00:16:41.945 "params": { 00:16:41.945 "nqn": "nqn.2016-06.io.spdk:cnode2728", 00:16:41.945 "max_cntlid": 65520 00:16:41.945 } 00:16:41.945 } 00:16:41.945 Got JSON-RPC error response 00:16:41.945 GoRPCClient: error on JSON-RPC call' 00:16:41.945 13:59:21 -- target/invalid.sh@80 -- # [[ 2024/04/26 13:59:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode2728], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:16:41.945 request: 00:16:41.945 { 00:16:41.945 "method": "nvmf_create_subsystem", 00:16:41.945 "params": { 00:16:41.945 "nqn": "nqn.2016-06.io.spdk:cnode2728", 00:16:41.945 "max_cntlid": 65520 00:16:41.945 } 00:16:41.945 } 00:16:41.945 Got JSON-RPC error response 00:16:41.945 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.945 13:59:21 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7968 -i 6 -I 5 00:16:41.945 [2024-04-26 13:59:21.549711] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7968: invalid cntlid range [6-5] 00:16:41.945 13:59:21 -- target/invalid.sh@83 -- # out='2024/04/26 13:59:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7968], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:41.945 request: 00:16:41.945 { 00:16:41.945 "method": "nvmf_create_subsystem", 00:16:41.945 "params": { 00:16:41.945 "nqn": "nqn.2016-06.io.spdk:cnode7968", 00:16:41.945 "min_cntlid": 6, 00:16:41.945 "max_cntlid": 5 00:16:41.945 } 00:16:41.945 } 00:16:41.945 Got JSON-RPC error response 00:16:41.945 GoRPCClient: error on JSON-RPC call' 00:16:41.945 13:59:21 -- target/invalid.sh@84 -- # [[ 2024/04/26 13:59:21 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7968], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:16:41.945 request: 00:16:41.945 { 00:16:41.945 "method": "nvmf_create_subsystem", 00:16:41.945 "params": { 00:16:41.945 "nqn": "nqn.2016-06.io.spdk:cnode7968", 00:16:41.945 "min_cntlid": 6, 00:16:41.945 "max_cntlid": 5 00:16:41.945 } 00:16:41.945 } 00:16:41.945 Got JSON-RPC error response 00:16:41.945 
GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:41.945 13:59:21 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:42.203 13:59:21 -- target/invalid.sh@87 -- # out='request: 00:16:42.203 { 00:16:42.203 "name": "foobar", 00:16:42.203 "method": "nvmf_delete_target", 00:16:42.203 "req_id": 1 00:16:42.203 } 00:16:42.203 Got JSON-RPC error response 00:16:42.203 response: 00:16:42.203 { 00:16:42.203 "code": -32602, 00:16:42.203 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:42.203 }' 00:16:42.203 13:59:21 -- target/invalid.sh@88 -- # [[ request: 00:16:42.203 { 00:16:42.203 "name": "foobar", 00:16:42.203 "method": "nvmf_delete_target", 00:16:42.203 "req_id": 1 00:16:42.203 } 00:16:42.203 Got JSON-RPC error response 00:16:42.203 response: 00:16:42.203 { 00:16:42.203 "code": -32602, 00:16:42.203 "message": "The specified target doesn't exist, cannot delete it." 00:16:42.203 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:42.203 13:59:21 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:42.203 13:59:21 -- target/invalid.sh@91 -- # nvmftestfini 00:16:42.203 13:59:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:42.203 13:59:21 -- nvmf/common.sh@117 -- # sync 00:16:42.203 13:59:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:42.203 13:59:21 -- nvmf/common.sh@120 -- # set +e 00:16:42.203 13:59:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:42.203 13:59:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:42.203 rmmod nvme_tcp 00:16:42.203 rmmod nvme_fabrics 00:16:42.203 rmmod nvme_keyring 00:16:42.203 13:59:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:42.203 13:59:21 -- nvmf/common.sh@124 -- # set -e 00:16:42.203 13:59:21 -- nvmf/common.sh@125 -- # return 0 00:16:42.203 13:59:21 -- nvmf/common.sh@478 -- # '[' -n 69181 ']' 00:16:42.203 13:59:21 -- nvmf/common.sh@479 -- # killprocess 69181 00:16:42.203 13:59:21 -- common/autotest_common.sh@936 -- # '[' -z 69181 ']' 00:16:42.203 13:59:21 -- common/autotest_common.sh@940 -- # kill -0 69181 00:16:42.203 13:59:21 -- common/autotest_common.sh@941 -- # uname 00:16:42.203 13:59:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.203 13:59:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69181 00:16:42.203 13:59:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.203 13:59:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.203 killing process with pid 69181 00:16:42.203 13:59:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69181' 00:16:42.203 13:59:21 -- common/autotest_common.sh@955 -- # kill 69181 00:16:42.203 13:59:21 -- common/autotest_common.sh@960 -- # wait 69181 00:16:43.577 13:59:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.577 13:59:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:43.577 13:59:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:43.577 13:59:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.577 13:59:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.577 13:59:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.577 13:59:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.577 13:59:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:16:43.577 13:59:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:43.577 00:16:43.577 real 0m6.536s 00:16:43.577 user 0m22.737s 00:16:43.577 sys 0m1.708s 00:16:43.577 13:59:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.577 13:59:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.577 ************************************ 00:16:43.577 END TEST nvmf_invalid 00:16:43.577 ************************************ 00:16:43.836 13:59:23 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:43.836 13:59:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.836 13:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.836 13:59:23 -- common/autotest_common.sh@10 -- # set +x 00:16:43.836 ************************************ 00:16:43.836 START TEST nvmf_abort 00:16:43.836 ************************************ 00:16:43.836 13:59:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:43.836 * Looking for test storage... 00:16:43.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:43.836 13:59:23 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.836 13:59:23 -- nvmf/common.sh@7 -- # uname -s 00:16:43.836 13:59:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.836 13:59:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.836 13:59:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.836 13:59:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.836 13:59:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.836 13:59:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.836 13:59:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.836 13:59:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.836 13:59:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.095 13:59:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.095 13:59:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:44.095 13:59:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:44.095 13:59:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.095 13:59:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.095 13:59:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:44.095 13:59:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.095 13:59:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:44.095 13:59:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.095 13:59:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.095 13:59:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.095 13:59:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.095 13:59:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.095 13:59:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.095 13:59:23 -- paths/export.sh@5 -- # export PATH 00:16:44.095 13:59:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.095 13:59:23 -- nvmf/common.sh@47 -- # : 0 00:16:44.095 13:59:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:44.095 13:59:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:44.095 13:59:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.095 13:59:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.096 13:59:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.096 13:59:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:44.096 13:59:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:44.096 13:59:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:44.096 13:59:23 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:44.096 13:59:23 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:44.096 13:59:23 -- target/abort.sh@14 -- # nvmftestinit 00:16:44.096 13:59:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:44.096 13:59:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.096 13:59:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:44.096 13:59:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:44.096 13:59:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:44.096 13:59:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:44.096 13:59:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:44.096 13:59:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.096 13:59:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:44.096 13:59:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:44.096 13:59:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:44.096 13:59:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:44.096 13:59:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:44.096 13:59:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:44.096 13:59:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.096 13:59:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.096 13:59:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:44.096 13:59:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:44.096 13:59:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:44.096 13:59:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:44.096 13:59:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:44.096 13:59:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.096 13:59:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:44.096 13:59:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:44.096 13:59:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:44.096 13:59:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:44.096 13:59:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:44.096 13:59:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:44.096 Cannot find device "nvmf_tgt_br" 00:16:44.096 13:59:23 -- nvmf/common.sh@155 -- # true 00:16:44.096 13:59:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:44.096 Cannot find device "nvmf_tgt_br2" 00:16:44.096 13:59:23 -- nvmf/common.sh@156 -- # true 00:16:44.096 13:59:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:44.096 13:59:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:44.096 Cannot find device "nvmf_tgt_br" 00:16:44.096 13:59:23 -- nvmf/common.sh@158 -- # true 00:16:44.096 13:59:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:44.096 Cannot find device "nvmf_tgt_br2" 00:16:44.096 13:59:23 -- nvmf/common.sh@159 -- # true 00:16:44.096 13:59:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:44.096 13:59:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:44.096 13:59:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:44.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.096 13:59:23 -- nvmf/common.sh@162 -- # true 00:16:44.096 13:59:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:44.096 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:44.096 13:59:23 -- nvmf/common.sh@163 -- # true 00:16:44.096 13:59:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:44.096 13:59:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.096 13:59:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.096 13:59:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.096 
13:59:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.355 13:59:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.355 13:59:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.355 13:59:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.355 13:59:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.355 13:59:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.355 13:59:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.355 13:59:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.355 13:59:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.355 13:59:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.355 13:59:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.355 13:59:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.355 13:59:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.355 13:59:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.355 13:59:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.355 13:59:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.355 13:59:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.355 13:59:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.355 13:59:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.355 13:59:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:44.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:16:44.355 00:16:44.355 --- 10.0.0.2 ping statistics --- 00:16:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.355 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:16:44.355 13:59:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:44.355 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.355 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:16:44.355 00:16:44.355 --- 10.0.0.3 ping statistics --- 00:16:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.355 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:44.355 13:59:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:44.355 00:16:44.355 --- 10.0.0.1 ping statistics --- 00:16:44.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.355 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:44.355 13:59:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.355 13:59:23 -- nvmf/common.sh@422 -- # return 0 00:16:44.355 13:59:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:44.355 13:59:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.355 13:59:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:44.355 13:59:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:44.355 13:59:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.355 13:59:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:44.355 13:59:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:44.355 13:59:23 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:44.355 13:59:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:44.355 13:59:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:44.355 13:59:23 -- common/autotest_common.sh@10 -- # set +x 00:16:44.355 13:59:23 -- nvmf/common.sh@470 -- # nvmfpid=69704 00:16:44.355 13:59:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:44.355 13:59:23 -- nvmf/common.sh@471 -- # waitforlisten 69704 00:16:44.355 13:59:23 -- common/autotest_common.sh@817 -- # '[' -z 69704 ']' 00:16:44.355 13:59:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.355 13:59:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:44.355 13:59:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.355 13:59:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:44.355 13:59:23 -- common/autotest_common.sh@10 -- # set +x 00:16:44.614 [2024-04-26 13:59:24.080480] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:44.614 [2024-04-26 13:59:24.080614] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.614 [2024-04-26 13:59:24.255951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:44.872 [2024-04-26 13:59:24.506229] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.872 [2024-04-26 13:59:24.506320] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:44.872 [2024-04-26 13:59:24.506349] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.872 [2024-04-26 13:59:24.506397] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.872 [2024-04-26 13:59:24.506419] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:44.872 [2024-04-26 13:59:24.506903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.872 [2024-04-26 13:59:24.507636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.872 [2024-04-26 13:59:24.508278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:45.440 13:59:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:45.440 13:59:24 -- common/autotest_common.sh@850 -- # return 0 00:16:45.440 13:59:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:45.440 13:59:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:45.440 13:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:45.440 13:59:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.440 13:59:24 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:16:45.440 13:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.440 13:59:24 -- common/autotest_common.sh@10 -- # set +x 00:16:45.440 [2024-04-26 13:59:25.000742] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:45.440 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.440 13:59:25 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:45.440 13:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.440 13:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.440 Malloc0 00:16:45.440 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.440 13:59:25 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:45.440 13:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.697 13:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 Delay0 00:16:45.697 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.697 13:59:25 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:45.697 13:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.697 13:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.697 13:59:25 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:45.697 13:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.697 13:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.697 13:59:25 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:45.697 13:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.697 13:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 [2024-04-26 13:59:25.152041] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:45.697 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.697 13:59:25 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:45.697 13:59:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:45.697 13:59:25 -- common/autotest_common.sh@10 -- # set +x 00:16:45.697 13:59:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:45.697 13:59:25 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:45.953 [2024-04-26 13:59:25.394945] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:47.846 Initializing NVMe Controllers 00:16:47.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:47.846 controller IO queue size 128 less than required 00:16:47.846 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:47.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:47.846 Initialization complete. Launching workers. 00:16:47.846 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36558 00:16:47.846 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36619, failed to submit 66 00:16:47.846 success 36558, unsuccess 61, failed 0 00:16:47.846 13:59:27 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:47.846 13:59:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:47.846 13:59:27 -- common/autotest_common.sh@10 -- # set +x 00:16:47.846 13:59:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:47.846 13:59:27 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:47.846 13:59:27 -- target/abort.sh@38 -- # nvmftestfini 00:16:47.846 13:59:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:47.846 13:59:27 -- nvmf/common.sh@117 -- # sync 00:16:48.104 13:59:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.104 13:59:27 -- nvmf/common.sh@120 -- # set +e 00:16:48.104 13:59:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.104 13:59:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.104 rmmod nvme_tcp 00:16:48.104 rmmod nvme_fabrics 00:16:48.104 rmmod nvme_keyring 00:16:48.104 13:59:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.104 13:59:27 -- nvmf/common.sh@124 -- # set -e 00:16:48.104 13:59:27 -- nvmf/common.sh@125 -- # return 0 00:16:48.104 13:59:27 -- nvmf/common.sh@478 -- # '[' -n 69704 ']' 00:16:48.104 13:59:27 -- nvmf/common.sh@479 -- # killprocess 69704 00:16:48.104 13:59:27 -- common/autotest_common.sh@936 -- # '[' -z 69704 ']' 00:16:48.104 13:59:27 -- common/autotest_common.sh@940 -- # kill -0 69704 00:16:48.104 13:59:27 -- common/autotest_common.sh@941 -- # uname 00:16:48.104 13:59:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:48.104 13:59:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69704 00:16:48.104 13:59:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:48.104 13:59:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:48.104 killing process with pid 69704 00:16:48.104 13:59:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69704' 00:16:48.105 13:59:27 -- common/autotest_common.sh@955 -- # kill 69704 00:16:48.105 13:59:27 -- common/autotest_common.sh@960 -- # wait 69704 00:16:50.006 13:59:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:50.006 13:59:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:50.006 13:59:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.006 13:59:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.006 
13:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.006 13:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.006 13:59:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:50.006 00:16:50.006 real 0m5.864s 00:16:50.006 user 0m15.110s 00:16:50.006 sys 0m1.421s 00:16:50.006 13:59:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.006 13:59:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.006 ************************************ 00:16:50.006 END TEST nvmf_abort 00:16:50.006 ************************************ 00:16:50.006 13:59:29 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:50.006 13:59:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.006 13:59:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.006 13:59:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.006 ************************************ 00:16:50.006 START TEST nvmf_ns_hotplug_stress 00:16:50.006 ************************************ 00:16:50.006 13:59:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:50.006 * Looking for test storage... 00:16:50.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:50.006 13:59:29 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:50.006 13:59:29 -- nvmf/common.sh@7 -- # uname -s 00:16:50.006 13:59:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.006 13:59:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.006 13:59:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.006 13:59:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.006 13:59:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.006 13:59:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.006 13:59:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.006 13:59:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.006 13:59:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.006 13:59:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:50.006 13:59:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:16:50.006 13:59:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.006 13:59:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.006 13:59:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:50.006 13:59:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.006 13:59:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:50.006 13:59:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.006 13:59:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.006 13:59:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.006 13:59:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.006 13:59:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.006 13:59:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.006 13:59:29 -- paths/export.sh@5 -- # export PATH 00:16:50.006 13:59:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.006 13:59:29 -- nvmf/common.sh@47 -- # : 0 00:16:50.006 13:59:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.006 13:59:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.006 13:59:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.006 13:59:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.006 13:59:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.006 13:59:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.006 13:59:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.006 13:59:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.006 13:59:29 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:50.006 13:59:29 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:16:50.006 13:59:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:50.006 13:59:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.006 13:59:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:50.006 13:59:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:50.006 13:59:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:50.006 13:59:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:16:50.006 13:59:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.006 13:59:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.006 13:59:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:50.006 13:59:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:50.007 13:59:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:50.007 13:59:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.007 13:59:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.007 13:59:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:50.007 13:59:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:50.007 13:59:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:50.007 13:59:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:50.007 13:59:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:50.007 13:59:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.007 13:59:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:50.007 13:59:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:50.007 13:59:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:50.007 13:59:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:50.007 13:59:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:50.007 13:59:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:50.007 Cannot find device "nvmf_tgt_br" 00:16:50.007 13:59:29 -- nvmf/common.sh@155 -- # true 00:16:50.007 13:59:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:50.007 Cannot find device "nvmf_tgt_br2" 00:16:50.007 13:59:29 -- nvmf/common.sh@156 -- # true 00:16:50.007 13:59:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:50.007 13:59:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:50.007 Cannot find device "nvmf_tgt_br" 00:16:50.007 13:59:29 -- nvmf/common.sh@158 -- # true 00:16:50.007 13:59:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:50.007 Cannot find device "nvmf_tgt_br2" 00:16:50.007 13:59:29 -- nvmf/common.sh@159 -- # true 00:16:50.007 13:59:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:50.266 13:59:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:50.266 13:59:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:50.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.266 13:59:29 -- nvmf/common.sh@162 -- # true 00:16:50.266 13:59:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:50.266 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:50.266 13:59:29 -- nvmf/common.sh@163 -- # true 00:16:50.266 13:59:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:50.266 13:59:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:50.266 13:59:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:50.266 13:59:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:50.266 13:59:29 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:50.266 13:59:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:50.266 13:59:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:50.266 13:59:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:50.266 13:59:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:50.266 13:59:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:50.266 13:59:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:50.266 13:59:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:50.266 13:59:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:50.266 13:59:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:50.266 13:59:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:50.266 13:59:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:50.266 13:59:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:50.266 13:59:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:50.266 13:59:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:50.266 13:59:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:50.266 13:59:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:50.266 13:59:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:50.266 13:59:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:50.266 13:59:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:50.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:16:50.266 00:16:50.266 --- 10.0.0.2 ping statistics --- 00:16:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.266 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:16:50.266 13:59:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:50.266 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:50.266 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:50.266 00:16:50.266 --- 10.0.0.3 ping statistics --- 00:16:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.266 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:50.266 13:59:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:50.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:50.266 00:16:50.266 --- 10.0.0.1 ping statistics --- 00:16:50.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.266 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:50.266 13:59:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.266 13:59:29 -- nvmf/common.sh@422 -- # return 0 00:16:50.266 13:59:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:50.266 13:59:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.266 13:59:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:50.266 13:59:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:50.266 13:59:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.266 13:59:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:50.266 13:59:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:50.526 13:59:29 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:16:50.526 13:59:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:50.526 13:59:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:50.526 13:59:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.526 13:59:29 -- nvmf/common.sh@470 -- # nvmfpid=69991 00:16:50.526 13:59:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:50.526 13:59:29 -- nvmf/common.sh@471 -- # waitforlisten 69991 00:16:50.526 13:59:29 -- common/autotest_common.sh@817 -- # '[' -z 69991 ']' 00:16:50.526 13:59:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.526 13:59:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:50.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.526 13:59:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.526 13:59:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:50.526 13:59:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.526 [2024-04-26 13:59:30.072405] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:16:50.526 [2024-04-26 13:59:30.072526] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.785 [2024-04-26 13:59:30.246429] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:51.043 [2024-04-26 13:59:30.484890] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.044 [2024-04-26 13:59:30.484958] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.044 [2024-04-26 13:59:30.484976] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.044 [2024-04-26 13:59:30.485000] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.044 [2024-04-26 13:59:30.485014] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
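Stripped of the xtrace prefixes, the nvmf_veth_init sequence above builds a small two-namespace fabric: the target side lives in the nvmf_tgt_ns_spdk namespace with 10.0.0.2 and 10.0.0.3 on two veth interfaces, the initiator keeps 10.0.0.1 on the host side, the peer ends are enslaved to the nvmf_br bridge, and an iptables rule admits TCP port 4420; the three pings confirm reachability before nvme-tcp is loaded. A condensed sketch of the same sequence, keeping only commands that appear in the trace (link-up steps and the second target interface omitted):

    # condensed from the nvmf_veth_init trace above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # initiator -> target reachability check
    modprobe nvme-tcp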
00:16:51.044 [2024-04-26 13:59:30.485132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.044 [2024-04-26 13:59:30.485386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.044 [2024-04-26 13:59:30.485421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.302 13:59:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:51.302 13:59:30 -- common/autotest_common.sh@850 -- # return 0 00:16:51.302 13:59:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:51.302 13:59:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:51.302 13:59:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.302 13:59:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.302 13:59:30 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:16:51.303 13:59:30 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:51.561 [2024-04-26 13:59:31.126842] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.561 13:59:31 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:51.820 13:59:31 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:52.079 [2024-04-26 13:59:31.510744] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.079 13:59:31 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:52.079 13:59:31 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:52.337 Malloc0 00:16:52.338 13:59:31 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:52.596 Delay0 00:16:52.596 13:59:32 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:52.855 13:59:32 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:53.113 NULL1 00:16:53.113 13:59:32 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:53.373 13:59:32 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:53.373 13:59:32 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=70112 00:16:53.373 13:59:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:53.373 13:59:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.632 13:59:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:53.632 13:59:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:16:53.632 13:59:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 
1001 00:16:53.891 true 00:16:53.891 13:59:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:53.891 13:59:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.150 13:59:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:54.409 13:59:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:16:54.409 13:59:33 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:54.668 true 00:16:54.668 13:59:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:54.668 13:59:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.630 Read completed with error (sct=0, sc=11) 00:16:55.630 13:59:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:55.888 13:59:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:16:55.888 13:59:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:55.888 true 00:16:55.888 13:59:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:55.888 13:59:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.147 13:59:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:56.405 13:59:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:16:56.405 13:59:35 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:56.664 true 00:16:56.664 13:59:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:56.664 13:59:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.601 13:59:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:57.860 13:59:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:16:57.860 13:59:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:57.860 true 00:16:57.860 13:59:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:57.860 13:59:37 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:58.119 13:59:37 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:58.409 13:59:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:16:58.409 13:59:37 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:58.670 true 00:16:58.670 13:59:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:58.670 13:59:38 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.605 13:59:39 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:16:59.863 13:59:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:16:59.863 13:59:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:59.863 true 00:16:59.863 13:59:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:16:59.863 13:59:39 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:00.121 13:59:39 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:00.379 13:59:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:17:00.379 13:59:39 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:17:00.638 true 00:17:00.638 13:59:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:00.638 13:59:40 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:01.574 13:59:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:01.833 13:59:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:17:01.833 13:59:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:17:01.833 true 00:17:02.092 13:59:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:02.092 13:59:41 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:02.092 13:59:41 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:02.350 13:59:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:17:02.350 13:59:41 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:17:02.609 true 00:17:02.609 13:59:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:02.609 13:59:42 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:03.544 13:59:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:03.806 13:59:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:17:03.806 13:59:43 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:17:04.065 true 00:17:04.065 13:59:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:04.065 13:59:43 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:04.323 13:59:43 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:04.323 13:59:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:17:04.323 13:59:43 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:17:04.583 true 00:17:04.583 13:59:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:04.583 13:59:44 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:05.519 13:59:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:05.778 13:59:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:17:05.778 13:59:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:17:06.036 true 00:17:06.036 13:59:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:06.036 13:59:45 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:06.294 13:59:45 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:06.294 13:59:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:17:06.294 13:59:45 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:17:06.553 true 00:17:06.553 13:59:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:06.553 13:59:46 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.486 13:59:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:07.745 13:59:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:17:07.745 13:59:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:17:08.004 true 00:17:08.004 13:59:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:08.004 13:59:47 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:08.261 13:59:47 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:08.519 13:59:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:17:08.519 13:59:47 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:17:08.519 true 00:17:08.519 13:59:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:08.519 13:59:48 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:09.901 13:59:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:09.901 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:09.901 13:59:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:17:09.901 13:59:49 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:17:09.901 true 00:17:10.159 13:59:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:10.159 13:59:49 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:10.418 13:59:49 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:10.418 13:59:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:17:10.418 13:59:50 -- target/ns_hotplug_stress.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:17:10.674 true 00:17:10.674 13:59:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:10.674 13:59:50 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:11.607 13:59:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:11.865 13:59:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:17:11.865 13:59:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:17:11.865 true 00:17:12.123 13:59:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:12.123 13:59:51 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:12.123 13:59:51 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:12.382 13:59:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:17:12.382 13:59:51 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:17:12.641 true 00:17:12.641 13:59:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:12.641 13:59:52 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:13.576 13:59:53 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:13.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:13.576 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:13.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:13.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:13.833 13:59:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:17:13.833 13:59:53 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:17:14.091 true 00:17:14.091 13:59:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:14.091 13:59:53 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:15.032 13:59:54 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:15.032 13:59:54 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:17:15.032 13:59:54 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:17:15.290 true 00:17:15.290 13:59:54 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:15.290 13:59:54 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:15.549 13:59:55 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:15.807 13:59:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:17:15.807 13:59:55 -- target/ns_hotplug_stress.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:17:15.807 true 00:17:15.807 13:59:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:15.807 13:59:55 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:16.800 13:59:56 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.058 13:59:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:17:17.058 13:59:56 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:17:17.316 true 00:17:17.316 13:59:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:17.316 13:59:56 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:17.316 13:59:56 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:17.574 13:59:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:17:17.574 13:59:57 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:17:17.833 true 00:17:17.833 13:59:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:17.833 13:59:57 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:18.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:18.770 13:59:58 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:19.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:19.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:19.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:19.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:19.028 13:59:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:17:19.028 13:59:58 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:17:19.286 true 00:17:19.286 13:59:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:19.286 13:59:58 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.233 13:59:59 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.233 13:59:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:17:20.233 13:59:59 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:17:20.492 true 00:17:20.492 14:00:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:20.492 14:00:00 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:20.751 14:00:00 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:20.751 14:00:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:17:20.751 14:00:00 -- target/ns_hotplug_stress.sh@41 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:17:21.011 true 00:17:21.011 14:00:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:21.011 14:00:00 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:22.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.402 14:00:01 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:22.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:17:22.402 14:00:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:17:22.402 14:00:01 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:17:22.661 true 00:17:22.661 14:00:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:22.661 14:00:02 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:23.224 14:00:02 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:23.487 Initializing NVMe Controllers 00:17:23.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:23.487 Controller IO queue size 128, less than required. 00:17:23.487 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.487 Controller IO queue size 128, less than required. 00:17:23.487 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:23.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:23.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:23.487 Initialization complete. Launching workers. 
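The iterations traced above all follow the same shape: ns_hotplug_stress.sh@35 checks that the spdk_nvme_perf process (PERF_PID) is still alive, @36 detaches namespace 1 from cnode1, @37 re-attaches Delay0, and @40/@41 grow the NULL1 null bdev by one block, so the remote namespace keeps disappearing, reappearing and changing size underneath the reader. A minimal sketch of that loop, reconstructed from the traced commands (the literal script structure is an assumption); it exits once perf finishes and kill -0 starts failing, which is the "No such process" message seen below:

    # hotplug loop reconstructed from the ns_hotplug_stress.sh xtrace above
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # drop namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # re-attach Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"                        # grow NULL1 by one block
    done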
00:17:23.487 ======================================================== 00:17:23.487 Latency(us) 00:17:23.487 Device Information : IOPS MiB/s Average min max 00:17:23.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 645.41 0.32 115756.06 2715.40 1026876.64 00:17:23.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12334.95 6.02 10348.85 3385.97 484568.33 00:17:23.488 ======================================================== 00:17:23.488 Total : 12980.36 6.34 15589.88 2715.40 1026876.64 00:17:23.488 00:17:23.755 14:00:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:17:23.755 14:00:03 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:17:23.755 true 00:17:23.755 14:00:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 70112 00:17:23.755 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (70112) - No such process 00:17:23.755 14:00:03 -- target/ns_hotplug_stress.sh@44 -- # wait 70112 00:17:23.755 14:00:03 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:23.755 14:00:03 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:17:23.755 14:00:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:23.755 14:00:03 -- nvmf/common.sh@117 -- # sync 00:17:23.755 14:00:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.755 14:00:03 -- nvmf/common.sh@120 -- # set +e 00:17:23.755 14:00:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.755 14:00:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.755 rmmod nvme_tcp 00:17:24.012 rmmod nvme_fabrics 00:17:24.012 rmmod nvme_keyring 00:17:24.012 14:00:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.012 14:00:03 -- nvmf/common.sh@124 -- # set -e 00:17:24.012 14:00:03 -- nvmf/common.sh@125 -- # return 0 00:17:24.012 14:00:03 -- nvmf/common.sh@478 -- # '[' -n 69991 ']' 00:17:24.012 14:00:03 -- nvmf/common.sh@479 -- # killprocess 69991 00:17:24.012 14:00:03 -- common/autotest_common.sh@936 -- # '[' -z 69991 ']' 00:17:24.012 14:00:03 -- common/autotest_common.sh@940 -- # kill -0 69991 00:17:24.012 14:00:03 -- common/autotest_common.sh@941 -- # uname 00:17:24.012 14:00:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.012 14:00:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69991 00:17:24.012 killing process with pid 69991 00:17:24.012 14:00:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:24.012 14:00:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:24.012 14:00:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69991' 00:17:24.012 14:00:03 -- common/autotest_common.sh@955 -- # kill 69991 00:17:24.012 14:00:03 -- common/autotest_common.sh@960 -- # wait 69991 00:17:25.386 14:00:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:25.386 14:00:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:25.386 14:00:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:25.386 14:00:04 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:25.386 14:00:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:25.386 14:00:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.386 14:00:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.386 14:00:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.386 14:00:05 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 
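After the perf run ends, nvmftestfini above unwinds the environment in roughly the reverse order of setup: the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process started by nvmfappstart is killed, the spdk target namespace is removed and the initiator address is flushed. Condensed from the traced commands (the body of _remove_spdk_ns is not shown in the trace, so the netns deletion line is an assumption):

    # condensed nvmftestfini / nvmf_tcp_fini teardown, following the trace above
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"                 # nvmf_tgt started by nvmfappstart (pid 69991 here)
    ip netns delete nvmf_tgt_ns_spdk       # assumed effect of _remove_spdk_ns; not shown verbatim
    ip -4 addr flush nvmf_init_if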
00:17:25.386 00:17:25.386 real 0m35.664s 00:17:25.386 user 2m23.561s 00:17:25.386 sys 0m9.862s 00:17:25.386 14:00:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:25.386 ************************************ 00:17:25.386 END TEST nvmf_ns_hotplug_stress 00:17:25.386 ************************************ 00:17:25.386 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:25.643 14:00:05 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:25.643 14:00:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:25.643 14:00:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:25.643 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:25.643 ************************************ 00:17:25.643 START TEST nvmf_connect_stress 00:17:25.643 ************************************ 00:17:25.643 14:00:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:25.901 * Looking for test storage... 00:17:25.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:25.901 14:00:05 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:25.901 14:00:05 -- nvmf/common.sh@7 -- # uname -s 00:17:25.901 14:00:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.901 14:00:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.901 14:00:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.901 14:00:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.901 14:00:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.901 14:00:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.901 14:00:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.901 14:00:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.901 14:00:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.901 14:00:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.901 14:00:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:25.901 14:00:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:25.901 14:00:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.901 14:00:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.901 14:00:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:25.901 14:00:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.901 14:00:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:25.901 14:00:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.901 14:00:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.901 14:00:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.901 14:00:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.901 
14:00:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.901 14:00:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.901 14:00:05 -- paths/export.sh@5 -- # export PATH 00:17:25.901 14:00:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.901 14:00:05 -- nvmf/common.sh@47 -- # : 0 00:17:25.901 14:00:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:25.901 14:00:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:25.901 14:00:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.901 14:00:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.901 14:00:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.901 14:00:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:25.901 14:00:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:25.901 14:00:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:25.901 14:00:05 -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:25.901 14:00:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:25.901 14:00:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.901 14:00:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:25.901 14:00:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:25.901 14:00:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:25.901 14:00:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.901 14:00:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:25.901 14:00:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.901 14:00:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:25.901 14:00:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:25.901 14:00:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:25.901 14:00:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:25.901 14:00:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:25.901 14:00:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:25.901 14:00:05 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:17:25.901 14:00:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.901 14:00:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:25.901 14:00:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:25.901 14:00:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:25.901 14:00:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:25.901 14:00:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:25.901 14:00:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.901 14:00:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:25.901 14:00:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:25.901 14:00:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:25.901 14:00:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:25.901 14:00:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:25.901 14:00:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:25.901 Cannot find device "nvmf_tgt_br" 00:17:25.901 14:00:05 -- nvmf/common.sh@155 -- # true 00:17:25.901 14:00:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:25.901 Cannot find device "nvmf_tgt_br2" 00:17:25.901 14:00:05 -- nvmf/common.sh@156 -- # true 00:17:25.901 14:00:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:25.901 14:00:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:25.901 Cannot find device "nvmf_tgt_br" 00:17:25.901 14:00:05 -- nvmf/common.sh@158 -- # true 00:17:25.901 14:00:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:25.901 Cannot find device "nvmf_tgt_br2" 00:17:25.901 14:00:05 -- nvmf/common.sh@159 -- # true 00:17:25.901 14:00:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:25.901 14:00:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:25.901 14:00:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:25.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.901 14:00:05 -- nvmf/common.sh@162 -- # true 00:17:25.901 14:00:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:25.901 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:25.901 14:00:05 -- nvmf/common.sh@163 -- # true 00:17:25.901 14:00:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:25.901 14:00:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:25.901 14:00:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:26.158 14:00:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:26.158 14:00:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:26.158 14:00:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:26.158 14:00:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:26.158 14:00:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:26.158 14:00:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:26.158 14:00:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:26.158 14:00:05 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:17:26.159 14:00:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:26.159 14:00:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:26.159 14:00:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:26.159 14:00:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:26.159 14:00:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:26.159 14:00:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:26.159 14:00:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:26.159 14:00:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:26.159 14:00:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:26.159 14:00:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:26.159 14:00:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.159 14:00:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.159 14:00:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:26.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:17:26.159 00:17:26.159 --- 10.0.0.2 ping statistics --- 00:17:26.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.159 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:17:26.159 14:00:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:26.159 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.159 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:26.159 00:17:26.159 --- 10.0.0.3 ping statistics --- 00:17:26.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.159 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:26.159 14:00:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:26.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:17:26.159 00:17:26.159 --- 10.0.0.1 ping statistics --- 00:17:26.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.159 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:26.159 14:00:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.159 14:00:05 -- nvmf/common.sh@422 -- # return 0 00:17:26.159 14:00:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:26.159 14:00:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.159 14:00:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:26.159 14:00:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:26.159 14:00:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.159 14:00:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:26.159 14:00:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:26.159 14:00:05 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:26.159 14:00:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:26.159 14:00:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.159 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:26.159 14:00:05 -- nvmf/common.sh@470 -- # nvmfpid=71281 00:17:26.159 14:00:05 -- nvmf/common.sh@471 -- # waitforlisten 71281 00:17:26.159 14:00:05 -- common/autotest_common.sh@817 -- # '[' -z 71281 ']' 00:17:26.159 14:00:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:26.159 14:00:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.159 14:00:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.159 14:00:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.159 14:00:05 -- common/autotest_common.sh@10 -- # set +x 00:17:26.159 14:00:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:26.416 [2024-04-26 14:00:05.920693] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:26.416 [2024-04-26 14:00:05.920811] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:26.675 [2024-04-26 14:00:06.096774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:26.675 [2024-04-26 14:00:06.333681] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.675 [2024-04-26 14:00:06.333740] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.675 [2024-04-26 14:00:06.333756] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.675 [2024-04-26 14:00:06.333778] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.675 [2024-04-26 14:00:06.333792] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
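nvmfappstart -m 0xE above follows the same pattern as in the previous test: nvmf_tgt is launched in the background inside the target namespace, its pid is recorded (71281 here), and waitforlisten blocks until the application answers on /var/tmp/spdk.sock before any rpc_cmd calls are issued. A rough sketch of that pattern; the polling loop is an assumption, since waitforlisten's body is not shown in the trace:

    # sketch of the nvmfappstart / waitforlisten pattern seen above
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the RPC socket until the target is ready (assumed implementation)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.5
    done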
00:17:26.675 [2024-04-26 14:00:06.334064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.675 [2024-04-26 14:00:06.334088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.675 [2024-04-26 14:00:06.334352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:27.242 14:00:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.242 14:00:06 -- common/autotest_common.sh@850 -- # return 0 00:17:27.242 14:00:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:27.242 14:00:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:27.242 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.242 14:00:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.242 14:00:06 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:27.242 14:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.242 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.242 [2024-04-26 14:00:06.818498] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.242 14:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.242 14:00:06 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:27.242 14:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.242 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.242 14:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.242 14:00:06 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.242 14:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.242 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.242 [2024-04-26 14:00:06.838639] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.242 14:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.242 14:00:06 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:27.242 14:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.242 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.242 NULL1 00:17:27.242 14:00:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.243 14:00:06 -- target/connect_stress.sh@21 -- # PERF_PID=71333 00:17:27.243 14:00:06 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:27.243 14:00:06 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:27.243 14:00:06 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # seq 1 20 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- 
target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.243 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.243 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:27.500 14:00:06 -- target/connect_stress.sh@28 -- # cat 00:17:27.500 14:00:06 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:27.500 14:00:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.500 14:00:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.500 14:00:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.757 14:00:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.757 14:00:07 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:27.757 14:00:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.757 14:00:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.757 14:00:07 -- common/autotest_common.sh@10 -- # set +x 00:17:28.016 14:00:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.016 14:00:07 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:28.016 14:00:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.016 14:00:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.016 14:00:07 -- common/autotest_common.sh@10 -- # set +x 00:17:28.580 14:00:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.580 14:00:07 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:28.580 14:00:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.580 14:00:07 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:17:28.580 14:00:07 -- common/autotest_common.sh@10 -- # set +x 00:17:28.838 14:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:28.838 14:00:08 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:28.838 14:00:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.838 14:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:28.838 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:29.096 14:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.096 14:00:08 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:29.096 14:00:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.096 14:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.096 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:29.354 14:00:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.354 14:00:08 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:29.354 14:00:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.354 14:00:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.354 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:17:29.618 14:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:29.618 14:00:09 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:29.618 14:00:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.618 14:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:29.618 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.184 14:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.184 14:00:09 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:30.184 14:00:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.184 14:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.184 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.441 14:00:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.441 14:00:09 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:30.441 14:00:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.441 14:00:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.441 14:00:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.701 14:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.701 14:00:10 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:30.701 14:00:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.701 14:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.701 14:00:10 -- common/autotest_common.sh@10 -- # set +x 00:17:30.959 14:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:30.959 14:00:10 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:30.959 14:00:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.959 14:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:30.959 14:00:10 -- common/autotest_common.sh@10 -- # set +x 00:17:31.524 14:00:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.524 14:00:10 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:31.524 14:00:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.524 14:00:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.524 14:00:10 -- common/autotest_common.sh@10 -- # set +x 00:17:31.781 14:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.781 14:00:11 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:31.781 14:00:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.781 14:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.781 
14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:32.039 14:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.039 14:00:11 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:32.039 14:00:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.039 14:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.039 14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:32.296 14:00:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.296 14:00:11 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:32.296 14:00:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.296 14:00:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.296 14:00:11 -- common/autotest_common.sh@10 -- # set +x 00:17:32.680 14:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.680 14:00:12 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:32.680 14:00:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.680 14:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.680 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:17:32.936 14:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.936 14:00:12 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:32.936 14:00:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.936 14:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.936 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:17:33.502 14:00:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.502 14:00:12 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:33.502 14:00:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.502 14:00:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.502 14:00:12 -- common/autotest_common.sh@10 -- # set +x 00:17:33.762 14:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:33.762 14:00:13 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:33.762 14:00:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.762 14:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:33.762 14:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:34.022 14:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.022 14:00:13 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:34.022 14:00:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.022 14:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.022 14:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:34.289 14:00:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.289 14:00:13 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:34.289 14:00:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.289 14:00:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.289 14:00:13 -- common/autotest_common.sh@10 -- # set +x 00:17:34.871 14:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:34.871 14:00:14 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:34.871 14:00:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.871 14:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:34.871 14:00:14 -- common/autotest_common.sh@10 -- # set +x 00:17:35.129 14:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.129 14:00:14 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:35.129 14:00:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.129 14:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.129 14:00:14 -- 
common/autotest_common.sh@10 -- # set +x 00:17:35.388 14:00:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.388 14:00:14 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:35.388 14:00:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.388 14:00:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.388 14:00:14 -- common/autotest_common.sh@10 -- # set +x 00:17:35.647 14:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.647 14:00:15 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:35.647 14:00:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.647 14:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.647 14:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:35.906 14:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:35.906 14:00:15 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:35.906 14:00:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.906 14:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:35.906 14:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:36.474 14:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.474 14:00:15 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:36.474 14:00:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.474 14:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.474 14:00:15 -- common/autotest_common.sh@10 -- # set +x 00:17:36.732 14:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.732 14:00:16 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:36.732 14:00:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.732 14:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.732 14:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:36.991 14:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.991 14:00:16 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:36.991 14:00:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.991 14:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.991 14:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:37.248 14:00:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.248 14:00:16 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:37.248 14:00:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.248 14:00:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:37.248 14:00:16 -- common/autotest_common.sh@10 -- # set +x 00:17:37.511 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.769 14:00:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:37.769 14:00:17 -- target/connect_stress.sh@34 -- # kill -0 71333 00:17:37.769 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (71333) - No such process 00:17:37.769 14:00:17 -- target/connect_stress.sh@38 -- # wait 71333 00:17:37.769 14:00:17 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:17:37.769 14:00:17 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:37.769 14:00:17 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:37.769 14:00:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:37.769 14:00:17 -- nvmf/common.sh@117 -- # sync 00:17:37.769 14:00:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.769 14:00:17 -- nvmf/common.sh@120 -- # set +e 00:17:37.769 14:00:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.770 14:00:17 -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.770 rmmod nvme_tcp 00:17:37.770 rmmod nvme_fabrics 00:17:37.770 rmmod nvme_keyring 00:17:37.770 14:00:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.770 14:00:17 -- nvmf/common.sh@124 -- # set -e 00:17:37.770 14:00:17 -- nvmf/common.sh@125 -- # return 0 00:17:37.770 14:00:17 -- nvmf/common.sh@478 -- # '[' -n 71281 ']' 00:17:37.770 14:00:17 -- nvmf/common.sh@479 -- # killprocess 71281 00:17:37.770 14:00:17 -- common/autotest_common.sh@936 -- # '[' -z 71281 ']' 00:17:37.770 14:00:17 -- common/autotest_common.sh@940 -- # kill -0 71281 00:17:37.770 14:00:17 -- common/autotest_common.sh@941 -- # uname 00:17:37.770 14:00:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.770 14:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71281 00:17:37.770 killing process with pid 71281 00:17:37.770 14:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:37.770 14:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:37.770 14:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71281' 00:17:37.770 14:00:17 -- common/autotest_common.sh@955 -- # kill 71281 00:17:37.770 14:00:17 -- common/autotest_common.sh@960 -- # wait 71281 00:17:39.161 14:00:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:39.161 14:00:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:39.161 14:00:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:39.161 14:00:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.161 14:00:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.161 14:00:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.161 14:00:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.161 14:00:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.161 14:00:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:39.161 00:17:39.161 real 0m13.538s 00:17:39.161 user 0m42.455s 00:17:39.161 sys 0m4.303s 00:17:39.161 14:00:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:39.161 ************************************ 00:17:39.161 END TEST nvmf_connect_stress 00:17:39.161 ************************************ 00:17:39.161 14:00:18 -- common/autotest_common.sh@10 -- # set +x 00:17:39.161 14:00:18 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:39.161 14:00:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:39.161 14:00:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.161 14:00:18 -- common/autotest_common.sh@10 -- # set +x 00:17:39.422 ************************************ 00:17:39.422 START TEST nvmf_fused_ordering 00:17:39.422 ************************************ 00:17:39.422 14:00:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:39.422 * Looking for test storage... 
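For reference, the connect_stress phase above stands the target up with a short RPC sequence before launching the stress binary, and the nvmf_fused_ordering test that starts next repeats the same bring-up (additionally attaching the null bdev as a namespace). A minimal sketch of that sequence follows, assuming SPDK's scripts/rpc.py is on PATH and an nvmf_tgt process is already serving the default RPC socket; the log itself issues the same calls through the rpc_cmd wrapper in test/nvmf/common.sh:

# transport, subsystem, listener and backing bdev, in the order shown above
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks
# the stress tool then opens and closes connections against the subsystem for
# 10 seconds while the shell polls its PID with `kill -0 $PERF_PID`
./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10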
00:17:39.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:39.422 14:00:19 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:39.422 14:00:19 -- nvmf/common.sh@7 -- # uname -s 00:17:39.422 14:00:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.422 14:00:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.422 14:00:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.422 14:00:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.422 14:00:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.422 14:00:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.422 14:00:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.422 14:00:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.422 14:00:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.422 14:00:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.422 14:00:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:39.422 14:00:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:39.422 14:00:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.422 14:00:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.422 14:00:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:39.422 14:00:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.422 14:00:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:39.422 14:00:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.422 14:00:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.422 14:00:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.422 14:00:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.422 14:00:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.422 14:00:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.422 14:00:19 -- paths/export.sh@5 -- # export PATH 00:17:39.422 14:00:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.422 14:00:19 -- nvmf/common.sh@47 -- # : 0 00:17:39.422 14:00:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.422 14:00:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.422 14:00:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.422 14:00:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.422 14:00:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.422 14:00:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.422 14:00:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.422 14:00:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.422 14:00:19 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:39.422 14:00:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:39.682 14:00:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.682 14:00:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:39.682 14:00:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:39.682 14:00:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:39.682 14:00:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.682 14:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.682 14:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.682 14:00:19 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:39.682 14:00:19 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:39.682 14:00:19 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:39.682 14:00:19 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:39.682 14:00:19 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:39.682 14:00:19 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:39.682 14:00:19 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.682 14:00:19 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.682 14:00:19 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:39.682 14:00:19 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:39.682 14:00:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:39.682 14:00:19 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:39.682 14:00:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:39.682 14:00:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:39.682 14:00:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:39.682 14:00:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:39.682 14:00:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:39.682 14:00:19 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:39.682 14:00:19 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:39.682 14:00:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:39.682 Cannot find device "nvmf_tgt_br" 00:17:39.682 14:00:19 -- nvmf/common.sh@155 -- # true 00:17:39.682 14:00:19 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:39.682 Cannot find device "nvmf_tgt_br2" 00:17:39.682 14:00:19 -- nvmf/common.sh@156 -- # true 00:17:39.682 14:00:19 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:39.682 14:00:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:39.682 Cannot find device "nvmf_tgt_br" 00:17:39.682 14:00:19 -- nvmf/common.sh@158 -- # true 00:17:39.682 14:00:19 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:39.682 Cannot find device "nvmf_tgt_br2" 00:17:39.682 14:00:19 -- nvmf/common.sh@159 -- # true 00:17:39.682 14:00:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:39.682 14:00:19 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:39.682 14:00:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:39.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.682 14:00:19 -- nvmf/common.sh@162 -- # true 00:17:39.682 14:00:19 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:39.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:39.682 14:00:19 -- nvmf/common.sh@163 -- # true 00:17:39.682 14:00:19 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:39.682 14:00:19 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:39.682 14:00:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:39.682 14:00:19 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:39.682 14:00:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:39.682 14:00:19 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:39.682 14:00:19 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:39.682 14:00:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:39.682 14:00:19 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:39.939 14:00:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:39.939 14:00:19 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:39.939 14:00:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:39.939 14:00:19 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:39.939 14:00:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:39.939 14:00:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:39.939 14:00:19 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:39.940 14:00:19 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:39.940 14:00:19 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:39.940 14:00:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:39.940 14:00:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:39.940 14:00:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:39.940 14:00:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:39.940 14:00:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:39.940 14:00:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:39.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:39.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:39.940 00:17:39.940 --- 10.0.0.2 ping statistics --- 00:17:39.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.940 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:39.940 14:00:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:39.940 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:39.940 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:17:39.940 00:17:39.940 --- 10.0.0.3 ping statistics --- 00:17:39.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.940 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:17:39.940 14:00:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:39.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:39.940 00:17:39.940 --- 10.0.0.1 ping statistics --- 00:17:39.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.940 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:39.940 14:00:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.940 14:00:19 -- nvmf/common.sh@422 -- # return 0 00:17:39.940 14:00:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:39.940 14:00:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.940 14:00:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:39.940 14:00:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:39.940 14:00:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.940 14:00:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:39.940 14:00:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:39.940 14:00:19 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:39.940 14:00:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:39.940 14:00:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:39.940 14:00:19 -- common/autotest_common.sh@10 -- # set +x 00:17:39.940 14:00:19 -- nvmf/common.sh@470 -- # nvmfpid=71682 00:17:39.940 14:00:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:39.940 14:00:19 -- nvmf/common.sh@471 -- # waitforlisten 71682 00:17:39.940 14:00:19 -- common/autotest_common.sh@817 -- # '[' -z 71682 ']' 00:17:39.940 14:00:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.940 14:00:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.940 14:00:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
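The block above is nvmf_veth_init building the virtual test network: a network namespace nvmf_tgt_ns_spdk holds the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator side keeps nvmf_init_if (10.0.0.1), and the host-side peers are bridged through nvmf_br before the three ping checks. A condensed sketch of the same topology, using the interface names and addresses taken from the log (the second target interface, 10.0.0.3 on nvmf_tgt_if2, is built the same way and omitted here; requires root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge-side peer
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge-side peer
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the listener
ping -c 1 10.0.0.2                                          # initiator -> target, as checked above
# the target application is then started inside the namespace, as in the log:
# ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2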
00:17:39.940 14:00:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.940 14:00:19 -- common/autotest_common.sh@10 -- # set +x 00:17:40.198 [2024-04-26 14:00:19.618699] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:40.198 [2024-04-26 14:00:19.618813] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.198 [2024-04-26 14:00:19.796892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.456 [2024-04-26 14:00:20.035119] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.456 [2024-04-26 14:00:20.035180] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.456 [2024-04-26 14:00:20.035196] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.456 [2024-04-26 14:00:20.035219] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.456 [2024-04-26 14:00:20.035231] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.456 [2024-04-26 14:00:20.035272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.024 14:00:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.024 14:00:20 -- common/autotest_common.sh@850 -- # return 0 00:17:41.024 14:00:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:41.024 14:00:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:41.024 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.024 14:00:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:41.024 14:00:20 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:41.024 14:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.024 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.024 [2024-04-26 14:00:20.520817] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.024 14:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.024 14:00:20 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:41.024 14:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.024 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.024 14:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.024 14:00:20 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.024 14:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.024 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.024 [2024-04-26 14:00:20.544927] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.024 14:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.024 14:00:20 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:41.024 14:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.024 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.024 NULL1 00:17:41.024 14:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.024 14:00:20 -- 
target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:41.024 14:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.025 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.025 14:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.025 14:00:20 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:41.025 14:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:41.025 14:00:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.025 14:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.025 14:00:20 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:41.025 [2024-04-26 14:00:20.636598] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:41.025 [2024-04-26 14:00:20.636667] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71732 ] 00:17:41.678 Attached to nqn.2016-06.io.spdk:cnode1 00:17:41.678 Namespace ID: 1 size: 1GB 00:17:41.678 fused_ordering(0) 00:17:41.678 fused_ordering(1) 00:17:41.678 fused_ordering(2) 00:17:41.678 fused_ordering(3) 00:17:41.678 fused_ordering(4) 00:17:41.678 fused_ordering(5) 00:17:41.678 fused_ordering(6) 00:17:41.678 fused_ordering(7) 00:17:41.678 fused_ordering(8) 00:17:41.678 fused_ordering(9) 00:17:41.678 fused_ordering(10) 00:17:41.678 fused_ordering(11) 00:17:41.678 fused_ordering(12) 00:17:41.678 fused_ordering(13) 00:17:41.678 fused_ordering(14) 00:17:41.678 fused_ordering(15) 00:17:41.678 fused_ordering(16) 00:17:41.678 fused_ordering(17) 00:17:41.678 fused_ordering(18) 00:17:41.678 fused_ordering(19) 00:17:41.678 fused_ordering(20) 00:17:41.678 fused_ordering(21) 00:17:41.678 fused_ordering(22) 00:17:41.678 fused_ordering(23) 00:17:41.678 fused_ordering(24) 00:17:41.678 fused_ordering(25) 00:17:41.678 fused_ordering(26) 00:17:41.678 fused_ordering(27) 00:17:41.678 fused_ordering(28) 00:17:41.678 fused_ordering(29) 00:17:41.678 fused_ordering(30) 00:17:41.678 fused_ordering(31) 00:17:41.678 fused_ordering(32) 00:17:41.678 fused_ordering(33) 00:17:41.678 fused_ordering(34) 00:17:41.678 fused_ordering(35) 00:17:41.678 fused_ordering(36) 00:17:41.678 fused_ordering(37) 00:17:41.678 fused_ordering(38) 00:17:41.678 fused_ordering(39) 00:17:41.678 fused_ordering(40) 00:17:41.678 fused_ordering(41) 00:17:41.678 fused_ordering(42) 00:17:41.678 fused_ordering(43) 00:17:41.678 fused_ordering(44) 00:17:41.678 fused_ordering(45) 00:17:41.678 fused_ordering(46) 00:17:41.678 fused_ordering(47) 00:17:41.678 fused_ordering(48) 00:17:41.678 fused_ordering(49) 00:17:41.678 fused_ordering(50) 00:17:41.678 fused_ordering(51) 00:17:41.678 fused_ordering(52) 00:17:41.678 fused_ordering(53) 00:17:41.678 fused_ordering(54) 00:17:41.678 fused_ordering(55) 00:17:41.678 fused_ordering(56) 00:17:41.678 fused_ordering(57) 00:17:41.678 fused_ordering(58) 00:17:41.678 fused_ordering(59) 00:17:41.678 fused_ordering(60) 00:17:41.678 fused_ordering(61) 00:17:41.678 fused_ordering(62) 00:17:41.678 fused_ordering(63) 00:17:41.678 fused_ordering(64) 00:17:41.678 fused_ordering(65) 00:17:41.678 fused_ordering(66) 00:17:41.678 fused_ordering(67) 00:17:41.678 fused_ordering(68) 00:17:41.678 
fused_ordering(69) 00:17:41.678 fused_ordering(70) 00:17:41.678 fused_ordering(71) 00:17:41.678 fused_ordering(72) 00:17:41.678 fused_ordering(73) 00:17:41.678 fused_ordering(74) 00:17:41.678 fused_ordering(75) 00:17:41.678 fused_ordering(76) 00:17:41.678 fused_ordering(77) 00:17:41.678 fused_ordering(78) 00:17:41.678 fused_ordering(79) 00:17:41.678 fused_ordering(80) 00:17:41.678 fused_ordering(81) 00:17:41.678 fused_ordering(82) 00:17:41.678 fused_ordering(83) 00:17:41.678 fused_ordering(84) 00:17:41.678 fused_ordering(85) 00:17:41.678 fused_ordering(86) 00:17:41.678 fused_ordering(87) 00:17:41.678 fused_ordering(88) 00:17:41.678 fused_ordering(89) 00:17:41.678 fused_ordering(90) 00:17:41.678 fused_ordering(91) 00:17:41.678 fused_ordering(92) 00:17:41.678 fused_ordering(93) 00:17:41.678 fused_ordering(94) 00:17:41.678 fused_ordering(95) 00:17:41.678 fused_ordering(96) 00:17:41.678 fused_ordering(97) 00:17:41.678 fused_ordering(98) 00:17:41.678 fused_ordering(99) 00:17:41.678 fused_ordering(100) 00:17:41.678 fused_ordering(101) 00:17:41.678 fused_ordering(102) 00:17:41.678 fused_ordering(103) 00:17:41.678 fused_ordering(104) 00:17:41.678 fused_ordering(105) 00:17:41.678 fused_ordering(106) 00:17:41.678 fused_ordering(107) 00:17:41.678 fused_ordering(108) 00:17:41.678 fused_ordering(109) 00:17:41.678 fused_ordering(110) 00:17:41.678 fused_ordering(111) 00:17:41.678 fused_ordering(112) 00:17:41.678 fused_ordering(113) 00:17:41.678 fused_ordering(114) 00:17:41.678 fused_ordering(115) 00:17:41.678 fused_ordering(116) 00:17:41.678 fused_ordering(117) 00:17:41.678 fused_ordering(118) 00:17:41.678 fused_ordering(119) 00:17:41.679 fused_ordering(120) 00:17:41.679 fused_ordering(121) 00:17:41.679 fused_ordering(122) 00:17:41.679 fused_ordering(123) 00:17:41.679 fused_ordering(124) 00:17:41.679 fused_ordering(125) 00:17:41.679 fused_ordering(126) 00:17:41.679 fused_ordering(127) 00:17:41.679 fused_ordering(128) 00:17:41.679 fused_ordering(129) 00:17:41.679 fused_ordering(130) 00:17:41.679 fused_ordering(131) 00:17:41.679 fused_ordering(132) 00:17:41.679 fused_ordering(133) 00:17:41.679 fused_ordering(134) 00:17:41.679 fused_ordering(135) 00:17:41.679 fused_ordering(136) 00:17:41.679 fused_ordering(137) 00:17:41.679 fused_ordering(138) 00:17:41.679 fused_ordering(139) 00:17:41.679 fused_ordering(140) 00:17:41.679 fused_ordering(141) 00:17:41.679 fused_ordering(142) 00:17:41.679 fused_ordering(143) 00:17:41.679 fused_ordering(144) 00:17:41.679 fused_ordering(145) 00:17:41.679 fused_ordering(146) 00:17:41.679 fused_ordering(147) 00:17:41.679 fused_ordering(148) 00:17:41.679 fused_ordering(149) 00:17:41.679 fused_ordering(150) 00:17:41.679 fused_ordering(151) 00:17:41.679 fused_ordering(152) 00:17:41.679 fused_ordering(153) 00:17:41.679 fused_ordering(154) 00:17:41.679 fused_ordering(155) 00:17:41.679 fused_ordering(156) 00:17:41.679 fused_ordering(157) 00:17:41.679 fused_ordering(158) 00:17:41.679 fused_ordering(159) 00:17:41.679 fused_ordering(160) 00:17:41.679 fused_ordering(161) 00:17:41.679 fused_ordering(162) 00:17:41.679 fused_ordering(163) 00:17:41.679 fused_ordering(164) 00:17:41.679 fused_ordering(165) 00:17:41.679 fused_ordering(166) 00:17:41.679 fused_ordering(167) 00:17:41.679 fused_ordering(168) 00:17:41.679 fused_ordering(169) 00:17:41.679 fused_ordering(170) 00:17:41.679 fused_ordering(171) 00:17:41.679 fused_ordering(172) 00:17:41.679 fused_ordering(173) 00:17:41.679 fused_ordering(174) 00:17:41.679 fused_ordering(175) 00:17:41.679 fused_ordering(176) 00:17:41.679 fused_ordering(177) 
00:17:41.679 fused_ordering(178) 00:17:41.679 fused_ordering(179) 00:17:41.679 fused_ordering(180) 00:17:41.679 fused_ordering(181) 00:17:41.679 fused_ordering(182) 00:17:41.679 fused_ordering(183) 00:17:41.679 fused_ordering(184) 00:17:41.679 fused_ordering(185) 00:17:41.679 fused_ordering(186) 00:17:41.679 fused_ordering(187) 00:17:41.679 fused_ordering(188) 00:17:41.679 fused_ordering(189) 00:17:41.679 fused_ordering(190) 00:17:41.679 fused_ordering(191) 00:17:41.679 fused_ordering(192) 00:17:41.679 fused_ordering(193) 00:17:41.679 fused_ordering(194) 00:17:41.679 fused_ordering(195) 00:17:41.679 fused_ordering(196) 00:17:41.679 fused_ordering(197) 00:17:41.679 fused_ordering(198) 00:17:41.679 fused_ordering(199) 00:17:41.679 fused_ordering(200) 00:17:41.679 fused_ordering(201) 00:17:41.679 fused_ordering(202) 00:17:41.679 fused_ordering(203) 00:17:41.679 fused_ordering(204) 00:17:41.679 fused_ordering(205) 00:17:41.679 fused_ordering(206) 00:17:41.679 fused_ordering(207) 00:17:41.679 fused_ordering(208) 00:17:41.679 fused_ordering(209) 00:17:41.679 fused_ordering(210) 00:17:41.679 fused_ordering(211) 00:17:41.679 fused_ordering(212) 00:17:41.679 fused_ordering(213) 00:17:41.679 fused_ordering(214) 00:17:41.679 fused_ordering(215) 00:17:41.679 fused_ordering(216) 00:17:41.679 fused_ordering(217) 00:17:41.679 fused_ordering(218) 00:17:41.679 fused_ordering(219) 00:17:41.679 fused_ordering(220) 00:17:41.679 fused_ordering(221) 00:17:41.679 fused_ordering(222) 00:17:41.679 fused_ordering(223) 00:17:41.679 fused_ordering(224) 00:17:41.679 fused_ordering(225) 00:17:41.679 fused_ordering(226) 00:17:41.679 fused_ordering(227) 00:17:41.679 fused_ordering(228) 00:17:41.679 fused_ordering(229) 00:17:41.679 fused_ordering(230) 00:17:41.679 fused_ordering(231) 00:17:41.679 fused_ordering(232) 00:17:41.679 fused_ordering(233) 00:17:41.679 fused_ordering(234) 00:17:41.679 fused_ordering(235) 00:17:41.679 fused_ordering(236) 00:17:41.679 fused_ordering(237) 00:17:41.679 fused_ordering(238) 00:17:41.679 fused_ordering(239) 00:17:41.679 fused_ordering(240) 00:17:41.679 fused_ordering(241) 00:17:41.679 fused_ordering(242) 00:17:41.679 fused_ordering(243) 00:17:41.679 fused_ordering(244) 00:17:41.679 fused_ordering(245) 00:17:41.679 fused_ordering(246) 00:17:41.679 fused_ordering(247) 00:17:41.679 fused_ordering(248) 00:17:41.679 fused_ordering(249) 00:17:41.679 fused_ordering(250) 00:17:41.679 fused_ordering(251) 00:17:41.679 fused_ordering(252) 00:17:41.679 fused_ordering(253) 00:17:41.679 fused_ordering(254) 00:17:41.679 fused_ordering(255) 00:17:41.679 fused_ordering(256) 00:17:41.679 fused_ordering(257) 00:17:41.679 fused_ordering(258) 00:17:41.679 fused_ordering(259) 00:17:41.679 fused_ordering(260) 00:17:41.679 fused_ordering(261) 00:17:41.679 fused_ordering(262) 00:17:41.679 fused_ordering(263) 00:17:41.679 fused_ordering(264) 00:17:41.679 fused_ordering(265) 00:17:41.679 fused_ordering(266) 00:17:41.679 fused_ordering(267) 00:17:41.679 fused_ordering(268) 00:17:41.679 fused_ordering(269) 00:17:41.679 fused_ordering(270) 00:17:41.679 fused_ordering(271) 00:17:41.679 fused_ordering(272) 00:17:41.679 fused_ordering(273) 00:17:41.679 fused_ordering(274) 00:17:41.679 fused_ordering(275) 00:17:41.679 fused_ordering(276) 00:17:41.679 fused_ordering(277) 00:17:41.679 fused_ordering(278) 00:17:41.679 fused_ordering(279) 00:17:41.679 fused_ordering(280) 00:17:41.679 fused_ordering(281) 00:17:41.679 fused_ordering(282) 00:17:41.679 fused_ordering(283) 00:17:41.679 fused_ordering(284) 00:17:41.679 
fused_ordering(285) 00:17:41.679 fused_ordering(286) 00:17:41.679 fused_ordering(287) 00:17:41.679 fused_ordering(288) 00:17:41.679 fused_ordering(289) 00:17:41.679 fused_ordering(290) 00:17:41.679 fused_ordering(291) 00:17:41.679 fused_ordering(292) 00:17:41.679 fused_ordering(293) 00:17:41.679 fused_ordering(294) 00:17:41.679 fused_ordering(295) 00:17:41.679 fused_ordering(296) 00:17:41.679 fused_ordering(297) 00:17:41.679 fused_ordering(298) 00:17:41.679 fused_ordering(299) 00:17:41.679 fused_ordering(300) 00:17:41.679 fused_ordering(301) 00:17:41.679 fused_ordering(302) 00:17:41.679 fused_ordering(303) 00:17:41.679 fused_ordering(304) 00:17:41.679 fused_ordering(305) 00:17:41.679 fused_ordering(306) 00:17:41.679 fused_ordering(307) 00:17:41.679 fused_ordering(308) 00:17:41.679 fused_ordering(309) 00:17:41.679 fused_ordering(310) 00:17:41.679 fused_ordering(311) 00:17:41.679 fused_ordering(312) 00:17:41.679 fused_ordering(313) 00:17:41.679 fused_ordering(314) 00:17:41.679 fused_ordering(315) 00:17:41.679 fused_ordering(316) 00:17:41.679 fused_ordering(317) 00:17:41.679 fused_ordering(318) 00:17:41.679 fused_ordering(319) 00:17:41.679 fused_ordering(320) 00:17:41.679 fused_ordering(321) 00:17:41.679 fused_ordering(322) 00:17:41.679 fused_ordering(323) 00:17:41.679 fused_ordering(324) 00:17:41.679 fused_ordering(325) 00:17:41.679 fused_ordering(326) 00:17:41.679 fused_ordering(327) 00:17:41.679 fused_ordering(328) 00:17:41.679 fused_ordering(329) 00:17:41.679 fused_ordering(330) 00:17:41.679 fused_ordering(331) 00:17:41.679 fused_ordering(332) 00:17:41.679 fused_ordering(333) 00:17:41.679 fused_ordering(334) 00:17:41.679 fused_ordering(335) 00:17:41.679 fused_ordering(336) 00:17:41.679 fused_ordering(337) 00:17:41.679 fused_ordering(338) 00:17:41.679 fused_ordering(339) 00:17:41.679 fused_ordering(340) 00:17:41.679 fused_ordering(341) 00:17:41.679 fused_ordering(342) 00:17:41.679 fused_ordering(343) 00:17:41.679 fused_ordering(344) 00:17:41.679 fused_ordering(345) 00:17:41.679 fused_ordering(346) 00:17:41.679 fused_ordering(347) 00:17:41.679 fused_ordering(348) 00:17:41.679 fused_ordering(349) 00:17:41.679 fused_ordering(350) 00:17:41.679 fused_ordering(351) 00:17:41.679 fused_ordering(352) 00:17:41.679 fused_ordering(353) 00:17:41.679 fused_ordering(354) 00:17:41.679 fused_ordering(355) 00:17:41.679 fused_ordering(356) 00:17:41.679 fused_ordering(357) 00:17:41.679 fused_ordering(358) 00:17:41.679 fused_ordering(359) 00:17:41.679 fused_ordering(360) 00:17:41.679 fused_ordering(361) 00:17:41.679 fused_ordering(362) 00:17:41.679 fused_ordering(363) 00:17:41.679 fused_ordering(364) 00:17:41.679 fused_ordering(365) 00:17:41.679 fused_ordering(366) 00:17:41.679 fused_ordering(367) 00:17:41.679 fused_ordering(368) 00:17:41.679 fused_ordering(369) 00:17:41.679 fused_ordering(370) 00:17:41.679 fused_ordering(371) 00:17:41.679 fused_ordering(372) 00:17:41.679 fused_ordering(373) 00:17:41.679 fused_ordering(374) 00:17:41.679 fused_ordering(375) 00:17:41.679 fused_ordering(376) 00:17:41.679 fused_ordering(377) 00:17:41.679 fused_ordering(378) 00:17:41.679 fused_ordering(379) 00:17:41.679 fused_ordering(380) 00:17:41.679 fused_ordering(381) 00:17:41.679 fused_ordering(382) 00:17:41.679 fused_ordering(383) 00:17:41.679 fused_ordering(384) 00:17:41.679 fused_ordering(385) 00:17:41.679 fused_ordering(386) 00:17:41.679 fused_ordering(387) 00:17:41.679 fused_ordering(388) 00:17:41.679 fused_ordering(389) 00:17:41.679 fused_ordering(390) 00:17:41.679 fused_ordering(391) 00:17:41.679 fused_ordering(392) 
00:17:41.679 fused_ordering(393) 00:17:41.679 fused_ordering(394) 00:17:41.679 fused_ordering(395) 00:17:41.679 fused_ordering(396) 00:17:41.679 fused_ordering(397) 00:17:41.679 fused_ordering(398) 00:17:41.679 fused_ordering(399) 00:17:41.680 fused_ordering(400) 00:17:41.680 fused_ordering(401) 00:17:41.680 fused_ordering(402) 00:17:41.680 fused_ordering(403) 00:17:41.680 fused_ordering(404) 00:17:41.680 fused_ordering(405) 00:17:41.680 fused_ordering(406) 00:17:41.680 fused_ordering(407) 00:17:41.680 fused_ordering(408) 00:17:41.680 fused_ordering(409) 00:17:41.680 fused_ordering(410) 00:17:42.251 fused_ordering(411) 00:17:42.251 fused_ordering(412) 00:17:42.251 fused_ordering(413) 00:17:42.251 fused_ordering(414) 00:17:42.251 fused_ordering(415) 00:17:42.251 fused_ordering(416) 00:17:42.251 fused_ordering(417) 00:17:42.251 fused_ordering(418) 00:17:42.251 fused_ordering(419) 00:17:42.251 fused_ordering(420) 00:17:42.251 fused_ordering(421) 00:17:42.251 fused_ordering(422) 00:17:42.251 fused_ordering(423) 00:17:42.251 fused_ordering(424) 00:17:42.251 fused_ordering(425) 00:17:42.251 fused_ordering(426) 00:17:42.251 fused_ordering(427) 00:17:42.251 fused_ordering(428) 00:17:42.251 fused_ordering(429) 00:17:42.251 fused_ordering(430) 00:17:42.251 fused_ordering(431) 00:17:42.251 fused_ordering(432) 00:17:42.251 fused_ordering(433) 00:17:42.251 fused_ordering(434) 00:17:42.251 fused_ordering(435) 00:17:42.251 fused_ordering(436) 00:17:42.251 fused_ordering(437) 00:17:42.251 fused_ordering(438) 00:17:42.251 fused_ordering(439) 00:17:42.251 fused_ordering(440) 00:17:42.251 fused_ordering(441) 00:17:42.251 fused_ordering(442) 00:17:42.251 fused_ordering(443) 00:17:42.251 fused_ordering(444) 00:17:42.251 fused_ordering(445) 00:17:42.251 fused_ordering(446) 00:17:42.251 fused_ordering(447) 00:17:42.251 fused_ordering(448) 00:17:42.251 fused_ordering(449) 00:17:42.251 fused_ordering(450) 00:17:42.251 fused_ordering(451) 00:17:42.251 fused_ordering(452) 00:17:42.251 fused_ordering(453) 00:17:42.251 fused_ordering(454) 00:17:42.251 fused_ordering(455) 00:17:42.251 fused_ordering(456) 00:17:42.251 fused_ordering(457) 00:17:42.251 fused_ordering(458) 00:17:42.251 fused_ordering(459) 00:17:42.251 fused_ordering(460) 00:17:42.251 fused_ordering(461) 00:17:42.251 fused_ordering(462) 00:17:42.251 fused_ordering(463) 00:17:42.251 fused_ordering(464) 00:17:42.251 fused_ordering(465) 00:17:42.251 fused_ordering(466) 00:17:42.251 fused_ordering(467) 00:17:42.251 fused_ordering(468) 00:17:42.251 fused_ordering(469) 00:17:42.251 fused_ordering(470) 00:17:42.251 fused_ordering(471) 00:17:42.251 fused_ordering(472) 00:17:42.251 fused_ordering(473) 00:17:42.251 fused_ordering(474) 00:17:42.251 fused_ordering(475) 00:17:42.251 fused_ordering(476) 00:17:42.251 fused_ordering(477) 00:17:42.251 fused_ordering(478) 00:17:42.251 fused_ordering(479) 00:17:42.251 fused_ordering(480) 00:17:42.251 fused_ordering(481) 00:17:42.251 fused_ordering(482) 00:17:42.251 fused_ordering(483) 00:17:42.251 fused_ordering(484) 00:17:42.251 fused_ordering(485) 00:17:42.251 fused_ordering(486) 00:17:42.251 fused_ordering(487) 00:17:42.251 fused_ordering(488) 00:17:42.251 fused_ordering(489) 00:17:42.251 fused_ordering(490) 00:17:42.251 fused_ordering(491) 00:17:42.251 fused_ordering(492) 00:17:42.251 fused_ordering(493) 00:17:42.251 fused_ordering(494) 00:17:42.251 fused_ordering(495) 00:17:42.251 fused_ordering(496) 00:17:42.251 fused_ordering(497) 00:17:42.251 fused_ordering(498) 00:17:42.251 fused_ordering(499) 00:17:42.251 
fused_ordering(500) 00:17:42.251 fused_ordering(501) 00:17:42.251 fused_ordering(502) 00:17:42.251 fused_ordering(503) 00:17:42.251 fused_ordering(504) 00:17:42.251 fused_ordering(505) 00:17:42.251 fused_ordering(506) 00:17:42.251 fused_ordering(507) 00:17:42.251 fused_ordering(508) 00:17:42.251 fused_ordering(509) 00:17:42.251 fused_ordering(510) 00:17:42.251 fused_ordering(511) 00:17:42.251 fused_ordering(512) 00:17:42.251 fused_ordering(513) 00:17:42.251 fused_ordering(514) 00:17:42.251 fused_ordering(515) 00:17:42.251 fused_ordering(516) 00:17:42.251 fused_ordering(517) 00:17:42.251 fused_ordering(518) 00:17:42.251 fused_ordering(519) 00:17:42.251 fused_ordering(520) 00:17:42.251 fused_ordering(521) 00:17:42.251 fused_ordering(522) 00:17:42.251 fused_ordering(523) 00:17:42.251 fused_ordering(524) 00:17:42.251 fused_ordering(525) 00:17:42.251 fused_ordering(526) 00:17:42.251 fused_ordering(527) 00:17:42.251 fused_ordering(528) 00:17:42.251 fused_ordering(529) 00:17:42.251 fused_ordering(530) 00:17:42.251 fused_ordering(531) 00:17:42.251 fused_ordering(532) 00:17:42.251 fused_ordering(533) 00:17:42.251 fused_ordering(534) 00:17:42.251 fused_ordering(535) 00:17:42.251 fused_ordering(536) 00:17:42.251 fused_ordering(537) 00:17:42.251 fused_ordering(538) 00:17:42.251 fused_ordering(539) 00:17:42.251 fused_ordering(540) 00:17:42.251 fused_ordering(541) 00:17:42.251 fused_ordering(542) 00:17:42.251 fused_ordering(543) 00:17:42.251 fused_ordering(544) 00:17:42.251 fused_ordering(545) 00:17:42.251 fused_ordering(546) 00:17:42.251 fused_ordering(547) 00:17:42.251 fused_ordering(548) 00:17:42.251 fused_ordering(549) 00:17:42.251 fused_ordering(550) 00:17:42.251 fused_ordering(551) 00:17:42.251 fused_ordering(552) 00:17:42.251 fused_ordering(553) 00:17:42.251 fused_ordering(554) 00:17:42.251 fused_ordering(555) 00:17:42.251 fused_ordering(556) 00:17:42.251 fused_ordering(557) 00:17:42.251 fused_ordering(558) 00:17:42.251 fused_ordering(559) 00:17:42.251 fused_ordering(560) 00:17:42.251 fused_ordering(561) 00:17:42.251 fused_ordering(562) 00:17:42.251 fused_ordering(563) 00:17:42.251 fused_ordering(564) 00:17:42.251 fused_ordering(565) 00:17:42.251 fused_ordering(566) 00:17:42.251 fused_ordering(567) 00:17:42.251 fused_ordering(568) 00:17:42.251 fused_ordering(569) 00:17:42.251 fused_ordering(570) 00:17:42.251 fused_ordering(571) 00:17:42.251 fused_ordering(572) 00:17:42.251 fused_ordering(573) 00:17:42.251 fused_ordering(574) 00:17:42.251 fused_ordering(575) 00:17:42.251 fused_ordering(576) 00:17:42.251 fused_ordering(577) 00:17:42.251 fused_ordering(578) 00:17:42.251 fused_ordering(579) 00:17:42.251 fused_ordering(580) 00:17:42.251 fused_ordering(581) 00:17:42.251 fused_ordering(582) 00:17:42.251 fused_ordering(583) 00:17:42.251 fused_ordering(584) 00:17:42.251 fused_ordering(585) 00:17:42.252 fused_ordering(586) 00:17:42.252 fused_ordering(587) 00:17:42.252 fused_ordering(588) 00:17:42.252 fused_ordering(589) 00:17:42.252 fused_ordering(590) 00:17:42.252 fused_ordering(591) 00:17:42.252 fused_ordering(592) 00:17:42.252 fused_ordering(593) 00:17:42.252 fused_ordering(594) 00:17:42.252 fused_ordering(595) 00:17:42.252 fused_ordering(596) 00:17:42.252 fused_ordering(597) 00:17:42.252 fused_ordering(598) 00:17:42.252 fused_ordering(599) 00:17:42.252 fused_ordering(600) 00:17:42.252 fused_ordering(601) 00:17:42.252 fused_ordering(602) 00:17:42.252 fused_ordering(603) 00:17:42.252 fused_ordering(604) 00:17:42.252 fused_ordering(605) 00:17:42.252 fused_ordering(606) 00:17:42.252 fused_ordering(607) 
00:17:42.252 fused_ordering(608) 00:17:42.252 fused_ordering(609) 00:17:42.252 fused_ordering(610) 00:17:42.252 fused_ordering(611) 00:17:42.252 fused_ordering(612) 00:17:42.252 fused_ordering(613) 00:17:42.252 fused_ordering(614) 00:17:42.252 fused_ordering(615) 00:17:42.511 fused_ordering(616) 00:17:42.511 fused_ordering(617) 00:17:42.511 fused_ordering(618) 00:17:42.511 fused_ordering(619) 00:17:42.511 fused_ordering(620) 00:17:42.511 fused_ordering(621) 00:17:42.511 fused_ordering(622) 00:17:42.511 fused_ordering(623) 00:17:42.511 fused_ordering(624) 00:17:42.511 fused_ordering(625) 00:17:42.511 fused_ordering(626) 00:17:42.511 fused_ordering(627) 00:17:42.511 fused_ordering(628) 00:17:42.511 fused_ordering(629) 00:17:42.511 fused_ordering(630) 00:17:42.511 fused_ordering(631) 00:17:42.511 fused_ordering(632) 00:17:42.511 fused_ordering(633) 00:17:42.511 fused_ordering(634) 00:17:42.511 fused_ordering(635) 00:17:42.511 fused_ordering(636) 00:17:42.511 fused_ordering(637) 00:17:42.511 fused_ordering(638) 00:17:42.511 fused_ordering(639) 00:17:42.511 fused_ordering(640) 00:17:42.511 fused_ordering(641) 00:17:42.511 fused_ordering(642) 00:17:42.511 fused_ordering(643) 00:17:42.511 fused_ordering(644) 00:17:42.511 fused_ordering(645) 00:17:42.511 fused_ordering(646) 00:17:42.511 fused_ordering(647) 00:17:42.511 fused_ordering(648) 00:17:42.511 fused_ordering(649) 00:17:42.511 fused_ordering(650) 00:17:42.511 fused_ordering(651) 00:17:42.511 fused_ordering(652) 00:17:42.511 fused_ordering(653) 00:17:42.511 fused_ordering(654) 00:17:42.511 fused_ordering(655) 00:17:42.511 fused_ordering(656) 00:17:42.511 fused_ordering(657) 00:17:42.511 fused_ordering(658) 00:17:42.511 fused_ordering(659) 00:17:42.511 fused_ordering(660) 00:17:42.511 fused_ordering(661) 00:17:42.511 fused_ordering(662) 00:17:42.511 fused_ordering(663) 00:17:42.511 fused_ordering(664) 00:17:42.511 fused_ordering(665) 00:17:42.511 fused_ordering(666) 00:17:42.511 fused_ordering(667) 00:17:42.511 fused_ordering(668) 00:17:42.511 fused_ordering(669) 00:17:42.511 fused_ordering(670) 00:17:42.511 fused_ordering(671) 00:17:42.511 fused_ordering(672) 00:17:42.511 fused_ordering(673) 00:17:42.511 fused_ordering(674) 00:17:42.511 fused_ordering(675) 00:17:42.511 fused_ordering(676) 00:17:42.511 fused_ordering(677) 00:17:42.511 fused_ordering(678) 00:17:42.511 fused_ordering(679) 00:17:42.511 fused_ordering(680) 00:17:42.511 fused_ordering(681) 00:17:42.511 fused_ordering(682) 00:17:42.511 fused_ordering(683) 00:17:42.511 fused_ordering(684) 00:17:42.511 fused_ordering(685) 00:17:42.511 fused_ordering(686) 00:17:42.511 fused_ordering(687) 00:17:42.511 fused_ordering(688) 00:17:42.511 fused_ordering(689) 00:17:42.511 fused_ordering(690) 00:17:42.511 fused_ordering(691) 00:17:42.511 fused_ordering(692) 00:17:42.511 fused_ordering(693) 00:17:42.511 fused_ordering(694) 00:17:42.511 fused_ordering(695) 00:17:42.511 fused_ordering(696) 00:17:42.511 fused_ordering(697) 00:17:42.511 fused_ordering(698) 00:17:42.511 fused_ordering(699) 00:17:42.511 fused_ordering(700) 00:17:42.511 fused_ordering(701) 00:17:42.511 fused_ordering(702) 00:17:42.511 fused_ordering(703) 00:17:42.511 fused_ordering(704) 00:17:42.511 fused_ordering(705) 00:17:42.511 fused_ordering(706) 00:17:42.511 fused_ordering(707) 00:17:42.511 fused_ordering(708) 00:17:42.511 fused_ordering(709) 00:17:42.511 fused_ordering(710) 00:17:42.511 fused_ordering(711) 00:17:42.511 fused_ordering(712) 00:17:42.511 fused_ordering(713) 00:17:42.511 fused_ordering(714) 00:17:42.511 
fused_ordering(715) 00:17:42.511 fused_ordering(716) 00:17:42.511 fused_ordering(717) 00:17:42.511 fused_ordering(718) 00:17:42.511 fused_ordering(719) 00:17:42.511 fused_ordering(720) 00:17:42.511 fused_ordering(721) 00:17:42.511 fused_ordering(722) 00:17:42.511 fused_ordering(723) 00:17:42.511 fused_ordering(724) 00:17:42.511 fused_ordering(725) 00:17:42.511 fused_ordering(726) 00:17:42.511 fused_ordering(727) 00:17:42.511 fused_ordering(728) 00:17:42.511 fused_ordering(729) 00:17:42.511 fused_ordering(730) 00:17:42.511 fused_ordering(731) 00:17:42.511 fused_ordering(732) 00:17:42.511 fused_ordering(733) 00:17:42.511 fused_ordering(734) 00:17:42.511 fused_ordering(735) 00:17:42.511 fused_ordering(736) 00:17:42.511 fused_ordering(737) 00:17:42.511 fused_ordering(738) 00:17:42.511 fused_ordering(739) 00:17:42.511 fused_ordering(740) 00:17:42.511 fused_ordering(741) 00:17:42.511 fused_ordering(742) 00:17:42.511 fused_ordering(743) 00:17:42.511 fused_ordering(744) 00:17:42.511 fused_ordering(745) 00:17:42.511 fused_ordering(746) 00:17:42.511 fused_ordering(747) 00:17:42.511 fused_ordering(748) 00:17:42.511 fused_ordering(749) 00:17:42.511 fused_ordering(750) 00:17:42.511 fused_ordering(751) 00:17:42.511 fused_ordering(752) 00:17:42.511 fused_ordering(753) 00:17:42.511 fused_ordering(754) 00:17:42.511 fused_ordering(755) 00:17:42.511 fused_ordering(756) 00:17:42.511 fused_ordering(757) 00:17:42.511 fused_ordering(758) 00:17:42.511 fused_ordering(759) 00:17:42.511 fused_ordering(760) 00:17:42.511 fused_ordering(761) 00:17:42.511 fused_ordering(762) 00:17:42.511 fused_ordering(763) 00:17:42.511 fused_ordering(764) 00:17:42.511 fused_ordering(765) 00:17:42.511 fused_ordering(766) 00:17:42.511 fused_ordering(767) 00:17:42.511 fused_ordering(768) 00:17:42.511 fused_ordering(769) 00:17:42.511 fused_ordering(770) 00:17:42.511 fused_ordering(771) 00:17:42.511 fused_ordering(772) 00:17:42.511 fused_ordering(773) 00:17:42.511 fused_ordering(774) 00:17:42.511 fused_ordering(775) 00:17:42.511 fused_ordering(776) 00:17:42.511 fused_ordering(777) 00:17:42.511 fused_ordering(778) 00:17:42.511 fused_ordering(779) 00:17:42.511 fused_ordering(780) 00:17:42.511 fused_ordering(781) 00:17:42.511 fused_ordering(782) 00:17:42.511 fused_ordering(783) 00:17:42.511 fused_ordering(784) 00:17:42.511 fused_ordering(785) 00:17:42.511 fused_ordering(786) 00:17:42.511 fused_ordering(787) 00:17:42.511 fused_ordering(788) 00:17:42.511 fused_ordering(789) 00:17:42.511 fused_ordering(790) 00:17:42.511 fused_ordering(791) 00:17:42.511 fused_ordering(792) 00:17:42.511 fused_ordering(793) 00:17:42.511 fused_ordering(794) 00:17:42.511 fused_ordering(795) 00:17:42.511 fused_ordering(796) 00:17:42.511 fused_ordering(797) 00:17:42.511 fused_ordering(798) 00:17:42.511 fused_ordering(799) 00:17:42.511 fused_ordering(800) 00:17:42.511 fused_ordering(801) 00:17:42.511 fused_ordering(802) 00:17:42.511 fused_ordering(803) 00:17:42.511 fused_ordering(804) 00:17:42.511 fused_ordering(805) 00:17:42.511 fused_ordering(806) 00:17:42.511 fused_ordering(807) 00:17:42.511 fused_ordering(808) 00:17:42.511 fused_ordering(809) 00:17:42.511 fused_ordering(810) 00:17:42.511 fused_ordering(811) 00:17:42.511 fused_ordering(812) 00:17:42.511 fused_ordering(813) 00:17:42.511 fused_ordering(814) 00:17:42.511 fused_ordering(815) 00:17:42.511 fused_ordering(816) 00:17:42.511 fused_ordering(817) 00:17:42.511 fused_ordering(818) 00:17:42.511 fused_ordering(819) 00:17:42.511 fused_ordering(820) 00:17:43.447 fused_ordering(821) 00:17:43.447 fused_ordering(822) 
00:17:43.447 fused_ordering(823) 00:17:43.447 fused_ordering(824) 00:17:43.447 fused_ordering(825) 00:17:43.447 fused_ordering(826) 00:17:43.447 fused_ordering(827) 00:17:43.447 fused_ordering(828) 00:17:43.447 fused_ordering(829) 00:17:43.447 fused_ordering(830) 00:17:43.447 fused_ordering(831) 00:17:43.447 fused_ordering(832) 00:17:43.447 fused_ordering(833) 00:17:43.447 fused_ordering(834) 00:17:43.447 fused_ordering(835) 00:17:43.447 fused_ordering(836) 00:17:43.447 fused_ordering(837) 00:17:43.447 fused_ordering(838) 00:17:43.447 fused_ordering(839) 00:17:43.447 fused_ordering(840) 00:17:43.447 fused_ordering(841) 00:17:43.447 fused_ordering(842) 00:17:43.447 fused_ordering(843) 00:17:43.447 fused_ordering(844) 00:17:43.447 fused_ordering(845) 00:17:43.447 fused_ordering(846) 00:17:43.447 fused_ordering(847) 00:17:43.447 fused_ordering(848) 00:17:43.447 fused_ordering(849) 00:17:43.447 fused_ordering(850) 00:17:43.447 fused_ordering(851) 00:17:43.447 fused_ordering(852) 00:17:43.447 fused_ordering(853) 00:17:43.447 fused_ordering(854) 00:17:43.447 fused_ordering(855) 00:17:43.447 fused_ordering(856) 00:17:43.447 fused_ordering(857) 00:17:43.447 fused_ordering(858) 00:17:43.447 fused_ordering(859) 00:17:43.447 fused_ordering(860) 00:17:43.447 fused_ordering(861) 00:17:43.447 fused_ordering(862) 00:17:43.447 fused_ordering(863) 00:17:43.447 fused_ordering(864) 00:17:43.447 fused_ordering(865) 00:17:43.447 fused_ordering(866) 00:17:43.447 fused_ordering(867) 00:17:43.447 fused_ordering(868) 00:17:43.447 fused_ordering(869) 00:17:43.447 fused_ordering(870) 00:17:43.447 fused_ordering(871) 00:17:43.447 fused_ordering(872) 00:17:43.447 fused_ordering(873) 00:17:43.447 fused_ordering(874) 00:17:43.447 fused_ordering(875) 00:17:43.447 fused_ordering(876) 00:17:43.447 fused_ordering(877) 00:17:43.447 fused_ordering(878) 00:17:43.448 fused_ordering(879) 00:17:43.448 fused_ordering(880) 00:17:43.448 fused_ordering(881) 00:17:43.448 fused_ordering(882) 00:17:43.448 fused_ordering(883) 00:17:43.448 fused_ordering(884) 00:17:43.448 fused_ordering(885) 00:17:43.448 fused_ordering(886) 00:17:43.448 fused_ordering(887) 00:17:43.448 fused_ordering(888) 00:17:43.448 fused_ordering(889) 00:17:43.448 fused_ordering(890) 00:17:43.448 fused_ordering(891) 00:17:43.448 fused_ordering(892) 00:17:43.448 fused_ordering(893) 00:17:43.448 fused_ordering(894) 00:17:43.448 fused_ordering(895) 00:17:43.448 fused_ordering(896) 00:17:43.448 fused_ordering(897) 00:17:43.448 fused_ordering(898) 00:17:43.448 fused_ordering(899) 00:17:43.448 fused_ordering(900) 00:17:43.448 fused_ordering(901) 00:17:43.448 fused_ordering(902) 00:17:43.448 fused_ordering(903) 00:17:43.448 fused_ordering(904) 00:17:43.448 fused_ordering(905) 00:17:43.448 fused_ordering(906) 00:17:43.448 fused_ordering(907) 00:17:43.448 fused_ordering(908) 00:17:43.448 fused_ordering(909) 00:17:43.448 fused_ordering(910) 00:17:43.448 fused_ordering(911) 00:17:43.448 fused_ordering(912) 00:17:43.448 fused_ordering(913) 00:17:43.448 fused_ordering(914) 00:17:43.448 fused_ordering(915) 00:17:43.448 fused_ordering(916) 00:17:43.448 fused_ordering(917) 00:17:43.448 fused_ordering(918) 00:17:43.448 fused_ordering(919) 00:17:43.448 fused_ordering(920) 00:17:43.448 fused_ordering(921) 00:17:43.448 fused_ordering(922) 00:17:43.448 fused_ordering(923) 00:17:43.448 fused_ordering(924) 00:17:43.448 fused_ordering(925) 00:17:43.448 fused_ordering(926) 00:17:43.448 fused_ordering(927) 00:17:43.448 fused_ordering(928) 00:17:43.448 fused_ordering(929) 00:17:43.448 
fused_ordering(930) 00:17:43.448 fused_ordering(931) 00:17:43.448 fused_ordering(932) 00:17:43.448 fused_ordering(933) 00:17:43.448 fused_ordering(934) 00:17:43.448 fused_ordering(935) 00:17:43.448 fused_ordering(936) 00:17:43.448 fused_ordering(937) 00:17:43.448 fused_ordering(938) 00:17:43.448 fused_ordering(939) 00:17:43.448 fused_ordering(940) 00:17:43.448 fused_ordering(941) 00:17:43.448 fused_ordering(942) 00:17:43.448 fused_ordering(943) 00:17:43.448 fused_ordering(944) 00:17:43.448 fused_ordering(945) 00:17:43.448 fused_ordering(946) 00:17:43.448 fused_ordering(947) 00:17:43.448 fused_ordering(948) 00:17:43.448 fused_ordering(949) 00:17:43.448 fused_ordering(950) 00:17:43.448 fused_ordering(951) 00:17:43.448 fused_ordering(952) 00:17:43.448 fused_ordering(953) 00:17:43.448 fused_ordering(954) 00:17:43.448 fused_ordering(955) 00:17:43.448 fused_ordering(956) 00:17:43.448 fused_ordering(957) 00:17:43.448 fused_ordering(958) 00:17:43.448 fused_ordering(959) 00:17:43.448 fused_ordering(960) 00:17:43.448 fused_ordering(961) 00:17:43.448 fused_ordering(962) 00:17:43.448 fused_ordering(963) 00:17:43.448 fused_ordering(964) 00:17:43.448 fused_ordering(965) 00:17:43.448 fused_ordering(966) 00:17:43.448 fused_ordering(967) 00:17:43.448 fused_ordering(968) 00:17:43.448 fused_ordering(969) 00:17:43.448 fused_ordering(970) 00:17:43.448 fused_ordering(971) 00:17:43.448 fused_ordering(972) 00:17:43.448 fused_ordering(973) 00:17:43.448 fused_ordering(974) 00:17:43.448 fused_ordering(975) 00:17:43.448 fused_ordering(976) 00:17:43.448 fused_ordering(977) 00:17:43.448 fused_ordering(978) 00:17:43.448 fused_ordering(979) 00:17:43.448 fused_ordering(980) 00:17:43.448 fused_ordering(981) 00:17:43.448 fused_ordering(982) 00:17:43.448 fused_ordering(983) 00:17:43.448 fused_ordering(984) 00:17:43.448 fused_ordering(985) 00:17:43.448 fused_ordering(986) 00:17:43.448 fused_ordering(987) 00:17:43.448 fused_ordering(988) 00:17:43.448 fused_ordering(989) 00:17:43.448 fused_ordering(990) 00:17:43.448 fused_ordering(991) 00:17:43.448 fused_ordering(992) 00:17:43.448 fused_ordering(993) 00:17:43.448 fused_ordering(994) 00:17:43.448 fused_ordering(995) 00:17:43.448 fused_ordering(996) 00:17:43.448 fused_ordering(997) 00:17:43.448 fused_ordering(998) 00:17:43.448 fused_ordering(999) 00:17:43.448 fused_ordering(1000) 00:17:43.448 fused_ordering(1001) 00:17:43.448 fused_ordering(1002) 00:17:43.448 fused_ordering(1003) 00:17:43.448 fused_ordering(1004) 00:17:43.448 fused_ordering(1005) 00:17:43.448 fused_ordering(1006) 00:17:43.448 fused_ordering(1007) 00:17:43.448 fused_ordering(1008) 00:17:43.448 fused_ordering(1009) 00:17:43.448 fused_ordering(1010) 00:17:43.448 fused_ordering(1011) 00:17:43.448 fused_ordering(1012) 00:17:43.448 fused_ordering(1013) 00:17:43.448 fused_ordering(1014) 00:17:43.448 fused_ordering(1015) 00:17:43.448 fused_ordering(1016) 00:17:43.448 fused_ordering(1017) 00:17:43.448 fused_ordering(1018) 00:17:43.448 fused_ordering(1019) 00:17:43.448 fused_ordering(1020) 00:17:43.448 fused_ordering(1021) 00:17:43.448 fused_ordering(1022) 00:17:43.448 fused_ordering(1023) 00:17:43.448 14:00:22 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:43.448 14:00:22 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:43.448 14:00:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:43.448 14:00:22 -- nvmf/common.sh@117 -- # sync 00:17:43.448 14:00:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.448 14:00:22 -- nvmf/common.sh@120 -- # set +e 00:17:43.448 14:00:22 -- nvmf/common.sh@121 -- 
# for i in {1..20} 00:17:43.448 14:00:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.449 rmmod nvme_tcp 00:17:43.449 rmmod nvme_fabrics 00:17:43.449 rmmod nvme_keyring 00:17:43.449 14:00:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.449 14:00:22 -- nvmf/common.sh@124 -- # set -e 00:17:43.449 14:00:22 -- nvmf/common.sh@125 -- # return 0 00:17:43.449 14:00:22 -- nvmf/common.sh@478 -- # '[' -n 71682 ']' 00:17:43.449 14:00:22 -- nvmf/common.sh@479 -- # killprocess 71682 00:17:43.449 14:00:22 -- common/autotest_common.sh@936 -- # '[' -z 71682 ']' 00:17:43.449 14:00:22 -- common/autotest_common.sh@940 -- # kill -0 71682 00:17:43.449 14:00:22 -- common/autotest_common.sh@941 -- # uname 00:17:43.449 14:00:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:43.449 14:00:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71682 00:17:43.449 14:00:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:43.449 14:00:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:43.449 killing process with pid 71682 00:17:43.449 14:00:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71682' 00:17:43.449 14:00:22 -- common/autotest_common.sh@955 -- # kill 71682 00:17:43.449 14:00:22 -- common/autotest_common.sh@960 -- # wait 71682 00:17:44.833 14:00:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:44.833 14:00:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:44.833 14:00:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:44.833 14:00:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.833 14:00:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.833 14:00:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.833 14:00:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.833 14:00:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.833 14:00:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:44.833 00:17:44.833 real 0m5.396s 00:17:44.833 user 0m5.994s 00:17:44.833 sys 0m1.666s 00:17:44.833 ************************************ 00:17:44.833 END TEST nvmf_fused_ordering 00:17:44.833 ************************************ 00:17:44.833 14:00:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:44.833 14:00:24 -- common/autotest_common.sh@10 -- # set +x 00:17:44.833 14:00:24 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:44.833 14:00:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:44.833 14:00:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:44.833 14:00:24 -- common/autotest_common.sh@10 -- # set +x 00:17:44.833 ************************************ 00:17:44.833 START TEST nvmf_delete_subsystem 00:17:44.833 ************************************ 00:17:44.833 14:00:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:17:45.090 * Looking for test storage... 
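The fused_ordering teardown traced just above (the nvmf/common.sh sync/modprobe steps plus the killprocess helper) reduces to unloading the NVMe-oF initiator modules and stopping the target by PID. A minimal sketch of that sequence, reconstructed only from the commands visible in this log — the function name and error handling here are illustrative, not the real nvmftestfini/killprocess implementations:

    # Sketch reconstructed from the traced teardown; not the actual nvmf/common.sh helpers.
    cleanup_nvmf_target() {
        local pid=$1
        sync
        modprobe -v -r nvme-tcp       # the rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
        modprobe -v -r nvme-fabrics
        if kill -0 "$pid" 2>/dev/null; then    # is the target reactor still running?
            kill "$pid"
            wait "$pid" 2>/dev/null || true    # reap it so the test can report a clean exit
        fi
    }

The same pattern repeats at the end of the delete_subsystem run further down, this time against pid 71963.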
00:17:45.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:45.091 14:00:24 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:45.091 14:00:24 -- nvmf/common.sh@7 -- # uname -s 00:17:45.091 14:00:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.091 14:00:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.091 14:00:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.091 14:00:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.091 14:00:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.091 14:00:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.091 14:00:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.091 14:00:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.091 14:00:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.091 14:00:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.091 14:00:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:45.091 14:00:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:45.091 14:00:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.091 14:00:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.091 14:00:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:45.091 14:00:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.091 14:00:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:45.091 14:00:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.091 14:00:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.091 14:00:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.091 14:00:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.091 14:00:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.091 14:00:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.091 14:00:24 -- paths/export.sh@5 -- # export PATH 00:17:45.091 14:00:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.091 14:00:24 -- nvmf/common.sh@47 -- # : 0 00:17:45.091 14:00:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.091 14:00:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.091 14:00:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.091 14:00:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.091 14:00:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.091 14:00:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.091 14:00:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.091 14:00:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.091 14:00:24 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:45.091 14:00:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:45.091 14:00:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.091 14:00:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:45.091 14:00:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:45.091 14:00:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:45.091 14:00:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.091 14:00:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.091 14:00:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.091 14:00:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:45.091 14:00:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:45.091 14:00:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:45.091 14:00:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:45.091 14:00:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:45.091 14:00:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:45.091 14:00:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.091 14:00:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.091 14:00:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:45.091 14:00:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:45.091 14:00:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:45.091 14:00:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:45.091 14:00:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:45.091 14:00:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:17:45.091 14:00:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:45.091 14:00:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:45.091 14:00:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:45.091 14:00:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:45.091 14:00:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:45.091 14:00:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:45.091 Cannot find device "nvmf_tgt_br" 00:17:45.091 14:00:24 -- nvmf/common.sh@155 -- # true 00:17:45.091 14:00:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:45.091 Cannot find device "nvmf_tgt_br2" 00:17:45.091 14:00:24 -- nvmf/common.sh@156 -- # true 00:17:45.091 14:00:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:45.091 14:00:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:45.091 Cannot find device "nvmf_tgt_br" 00:17:45.091 14:00:24 -- nvmf/common.sh@158 -- # true 00:17:45.091 14:00:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:45.091 Cannot find device "nvmf_tgt_br2" 00:17:45.091 14:00:24 -- nvmf/common.sh@159 -- # true 00:17:45.091 14:00:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:45.349 14:00:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:45.349 14:00:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:45.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.349 14:00:24 -- nvmf/common.sh@162 -- # true 00:17:45.349 14:00:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:45.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:45.349 14:00:24 -- nvmf/common.sh@163 -- # true 00:17:45.349 14:00:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:45.349 14:00:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:45.349 14:00:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:45.349 14:00:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:45.349 14:00:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:45.349 14:00:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:45.349 14:00:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:45.349 14:00:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:45.349 14:00:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:45.349 14:00:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:45.349 14:00:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:45.349 14:00:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:45.349 14:00:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:45.349 14:00:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:45.349 14:00:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:45.349 14:00:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:45.349 14:00:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:45.349 14:00:24 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:45.349 14:00:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:45.349 14:00:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:45.349 14:00:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:45.349 14:00:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:45.349 14:00:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:45.349 14:00:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:45.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:45.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:17:45.607 00:17:45.607 --- 10.0.0.2 ping statistics --- 00:17:45.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.607 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:45.607 14:00:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:45.607 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:45.607 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:17:45.607 00:17:45.607 --- 10.0.0.3 ping statistics --- 00:17:45.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.607 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:45.607 14:00:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:45.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:45.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:17:45.607 00:17:45.607 --- 10.0.0.1 ping statistics --- 00:17:45.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:45.607 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:17:45.607 14:00:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:45.607 14:00:25 -- nvmf/common.sh@422 -- # return 0 00:17:45.607 14:00:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:45.607 14:00:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:45.607 14:00:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:45.607 14:00:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:45.607 14:00:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:45.607 14:00:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:45.607 14:00:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:45.607 14:00:25 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:45.607 14:00:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:45.607 14:00:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:45.607 14:00:25 -- common/autotest_common.sh@10 -- # set +x 00:17:45.607 14:00:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:45.607 14:00:25 -- nvmf/common.sh@470 -- # nvmfpid=71963 00:17:45.607 14:00:25 -- nvmf/common.sh@471 -- # waitforlisten 71963 00:17:45.607 14:00:25 -- common/autotest_common.sh@817 -- # '[' -z 71963 ']' 00:17:45.607 14:00:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.607 14:00:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:45.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.607 14:00:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
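The block of ip/iptables commands above is the nvmf_veth_init topology used for NET_TYPE=virt: the initiator stays in the root namespace on 10.0.0.1, the target ends of the veth pairs sit inside nvmf_tgt_ns_spdk on 10.0.0.2 and 10.0.0.3, and the peer ends are joined by the nvmf_br bridge. A condensed sketch of that wiring, using only commands already traced above (the second target interface, nvmf_tgt_if2/10.0.0.3, is omitted for brevity):

    # Sketch of the veth/bridge layout built by nvmf_veth_init (one target interface shown).
    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays in the root ns
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair, one end moves into $NS
    ip link set nvmf_tgt_if netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br       # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach the listener port
    ping -c 1 10.0.0.2                            # initiator -> target, as in the ping output above

Because the data path lives inside the namespace, the target itself is then launched with ip netns exec, which is exactly what the nvmf_tgt command line above shows.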
00:17:45.607 14:00:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:45.607 14:00:25 -- common/autotest_common.sh@10 -- # set +x 00:17:45.607 [2024-04-26 14:00:25.169286] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:45.607 [2024-04-26 14:00:25.169424] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.913 [2024-04-26 14:00:25.344946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.172 [2024-04-26 14:00:25.589367] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.172 [2024-04-26 14:00:25.589444] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:46.172 [2024-04-26 14:00:25.589460] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.172 [2024-04-26 14:00:25.589484] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.172 [2024-04-26 14:00:25.589499] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.172 [2024-04-26 14:00:25.589736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.172 [2024-04-26 14:00:25.589770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.430 14:00:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:46.430 14:00:26 -- common/autotest_common.sh@850 -- # return 0 00:17:46.430 14:00:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:46.430 14:00:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:46.430 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.430 14:00:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.430 14:00:26 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:46.430 14:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.430 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.430 [2024-04-26 14:00:26.087001] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.430 14:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.430 14:00:26 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:46.430 14:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.430 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 14:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.688 14:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.688 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 [2024-04-26 14:00:26.112373] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.688 14:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:46.688 14:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.688 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 
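rpc_cmd in the trace is the autotest wrapper around SPDK's JSON-RPC client, so the provisioning steps traced so far map onto a plain scripts/rpc.py sequence. A sketch of the equivalent standalone calls, assuming the in-tree client and the default /var/tmp/spdk.sock socket; the Delay0 bdev and namespace attach that follow below complete the setup:

    # Equivalent standalone RPC sequence (sketch); flags copied from the traced rpc_cmd calls.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                  # same transport options as traced above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                            # allow any host, set serial, cap at 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                                # listen on the namespaced target address
    $rpc bdev_null_create NULL1 1000 512                          # 1000 MiB null backing bdev, 512-byte blocks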
NULL1 00:17:46.688 14:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:46.688 14:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.688 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 Delay0 00:17:46.688 14:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:46.688 14:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:46.688 14:00:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.688 14:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@28 -- # perf_pid=72014 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:46.688 14:00:26 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:46.947 [2024-04-26 14:00:26.380026] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:48.866 14:00:28 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.866 14:00:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:48.866 14:00:28 -- common/autotest_common.sh@10 -- # set +x 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 
starting I/O failed: -6 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 starting I/O failed: -6 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.866 Write completed with error (sct=0, sc=8) 00:17:48.866 Read completed with error (sct=0, sc=8) 00:17:48.867 [2024-04-26 14:00:28.428978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed 
with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 starting I/O failed: -6 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 [2024-04-26 14:00:28.433070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 
00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Write completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:48.867 Read completed with error (sct=0, sc=8) 00:17:49.805 [2024-04-26 14:00:29.397203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 [2024-04-26 14:00:29.418182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 
00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 [2024-04-26 14:00:29.425668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 [2024-04-26 14:00:29.428248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Write completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.805 Read completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 Write completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 Write completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 14:00:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:49.806 14:00:29 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:49.806 14:00:29 -- target/delete_subsystem.sh@35 -- # kill -0 72014 00:17:49.806 14:00:29 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, 
sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 Read completed with error (sct=0, sc=8) 00:17:49.806 [2024-04-26 14:00:29.434645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set 00:17:49.806 [2024-04-26 14:00:29.436725] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:17:49.806 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:49.806 Initializing NVMe Controllers 00:17:49.806 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:49.806 Controller IO queue size 128, less than required. 00:17:49.806 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:49.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:49.806 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:49.806 Initialization complete. Launching workers. 00:17:49.806 ======================================================== 00:17:49.806 Latency(us) 00:17:49.806 Device Information : IOPS MiB/s Average min max 00:17:49.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 157.79 0.08 924569.30 613.72 1012603.45 00:17:49.806 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.74 0.08 910849.02 2010.13 1014840.09 00:17:49.806 ======================================================== 00:17:49.806 Total : 321.54 0.16 917582.12 613.72 1014840.09 00:17:49.806 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@35 -- # kill -0 72014 00:17:50.378 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (72014) - No such process 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@45 -- # NOT wait 72014 00:17:50.378 14:00:29 -- common/autotest_common.sh@638 -- # local es=0 00:17:50.378 14:00:29 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 72014 00:17:50.378 14:00:29 -- common/autotest_common.sh@626 -- # local arg=wait 00:17:50.378 14:00:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:50.378 14:00:29 -- common/autotest_common.sh@630 -- # type -t wait 00:17:50.378 14:00:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:50.378 14:00:29 -- common/autotest_common.sh@641 -- # wait 72014 00:17:50.378 14:00:29 -- common/autotest_common.sh@641 -- # es=1 00:17:50.378 14:00:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:50.378 14:00:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:50.378 14:00:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:50.378 14:00:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.378 14:00:29 -- common/autotest_common.sh@10 -- # set +x 00:17:50.378 14:00:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:50.378 14:00:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.378 14:00:29 -- common/autotest_common.sh@10 -- # set +x 00:17:50.378 [2024-04-26 14:00:29.966559] 
tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.378 14:00:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:50.378 14:00:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:50.378 14:00:29 -- common/autotest_common.sh@10 -- # set +x 00:17:50.378 14:00:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@54 -- # perf_pid=72061 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:50.378 14:00:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:50.635 [2024-04-26 14:00:30.221244] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:50.893 14:00:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:50.893 14:00:30 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:50.893 14:00:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:51.456 14:00:30 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:51.456 14:00:30 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:51.456 14:00:30 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:52.025 14:00:31 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:52.025 14:00:31 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:52.025 14:00:31 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:52.591 14:00:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:52.591 14:00:32 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:52.591 14:00:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:52.849 14:00:32 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:52.849 14:00:32 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:52.849 14:00:32 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:53.414 14:00:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:53.414 14:00:33 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:53.414 14:00:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:53.672 Initializing NVMe Controllers 00:17:53.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.672 Controller IO queue size 128, less than required. 00:17:53.672 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:53.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:53.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:53.672 Initialization complete. Launching workers. 
00:17:53.672 ======================================================== 00:17:53.672 Latency(us) 00:17:53.672 Device Information : IOPS MiB/s Average min max 00:17:53.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003380.70 1000153.42 1011937.44 00:17:53.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004833.50 1000182.81 1014020.25 00:17:53.672 ======================================================== 00:17:53.672 Total : 256.00 0.12 1004107.10 1000153.42 1014020.25 00:17:53.672 00:17:53.965 14:00:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:53.965 14:00:33 -- target/delete_subsystem.sh@57 -- # kill -0 72061 00:17:53.965 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (72061) - No such process 00:17:53.965 14:00:33 -- target/delete_subsystem.sh@67 -- # wait 72061 00:17:53.965 14:00:33 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:53.965 14:00:33 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:53.965 14:00:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:53.965 14:00:33 -- nvmf/common.sh@117 -- # sync 00:17:54.223 14:00:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:54.223 14:00:33 -- nvmf/common.sh@120 -- # set +e 00:17:54.223 14:00:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:54.223 14:00:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:54.223 rmmod nvme_tcp 00:17:54.223 rmmod nvme_fabrics 00:17:54.223 rmmod nvme_keyring 00:17:54.223 14:00:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:54.223 14:00:33 -- nvmf/common.sh@124 -- # set -e 00:17:54.223 14:00:33 -- nvmf/common.sh@125 -- # return 0 00:17:54.223 14:00:33 -- nvmf/common.sh@478 -- # '[' -n 71963 ']' 00:17:54.223 14:00:33 -- nvmf/common.sh@479 -- # killprocess 71963 00:17:54.223 14:00:33 -- common/autotest_common.sh@936 -- # '[' -z 71963 ']' 00:17:54.223 14:00:33 -- common/autotest_common.sh@940 -- # kill -0 71963 00:17:54.223 14:00:33 -- common/autotest_common.sh@941 -- # uname 00:17:54.223 14:00:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:54.223 14:00:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71963 00:17:54.223 14:00:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:54.223 14:00:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:54.223 killing process with pid 71963 00:17:54.223 14:00:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71963' 00:17:54.223 14:00:33 -- common/autotest_common.sh@955 -- # kill 71963 00:17:54.223 14:00:33 -- common/autotest_common.sh@960 -- # wait 71963 00:17:55.599 14:00:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:55.599 14:00:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:55.599 14:00:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:55.599 14:00:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.599 14:00:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.599 14:00:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.599 14:00:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.599 14:00:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.599 14:00:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:55.599 00:17:55.599 real 0m10.673s 00:17:55.599 user 0m29.943s 00:17:55.599 sys 0m2.443s 00:17:55.599 14:00:35 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:17:55.599 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:17:55.599 ************************************ 00:17:55.599 END TEST nvmf_delete_subsystem 00:17:55.599 ************************************ 00:17:55.599 14:00:35 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:55.599 14:00:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:55.599 14:00:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:55.599 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:17:55.882 ************************************ 00:17:55.882 START TEST nvmf_ns_masking 00:17:55.882 ************************************ 00:17:55.882 14:00:35 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:55.882 * Looking for test storage... 00:17:55.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:55.882 14:00:35 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.882 14:00:35 -- nvmf/common.sh@7 -- # uname -s 00:17:55.882 14:00:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.882 14:00:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.882 14:00:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.882 14:00:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.882 14:00:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.882 14:00:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.882 14:00:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.882 14:00:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.882 14:00:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.882 14:00:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.882 14:00:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:55.882 14:00:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:17:55.882 14:00:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.882 14:00:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.882 14:00:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.882 14:00:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.882 14:00:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.882 14:00:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.882 14:00:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.882 14:00:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.882 14:00:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.882 14:00:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.882 14:00:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.882 14:00:35 -- paths/export.sh@5 -- # export PATH 00:17:55.882 14:00:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.882 14:00:35 -- nvmf/common.sh@47 -- # : 0 00:17:55.882 14:00:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.882 14:00:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.882 14:00:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.882 14:00:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.882 14:00:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.882 14:00:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.882 14:00:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.882 14:00:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.882 14:00:35 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:55.882 14:00:35 -- target/ns_masking.sh@11 -- # loops=5 00:17:55.882 14:00:35 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:55.882 14:00:35 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:17:55.882 14:00:35 -- target/ns_masking.sh@15 -- # uuidgen 00:17:55.882 14:00:35 -- target/ns_masking.sh@15 -- # HOSTID=32055c71-1abc-4e4f-b8a6-162ca9450851 00:17:55.882 14:00:35 -- target/ns_masking.sh@44 -- # nvmftestinit 00:17:55.882 14:00:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:55.882 14:00:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.882 14:00:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:55.882 14:00:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:55.882 14:00:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:55.882 14:00:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.882 14:00:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.882 14:00:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:17:55.882 14:00:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:55.882 14:00:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:55.882 14:00:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:55.882 14:00:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:55.882 14:00:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:55.882 14:00:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:55.882 14:00:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.882 14:00:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.882 14:00:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:55.882 14:00:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:55.882 14:00:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.882 14:00:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.882 14:00:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.882 14:00:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.882 14:00:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.882 14:00:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.882 14:00:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.882 14:00:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.882 14:00:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:55.882 14:00:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:55.882 Cannot find device "nvmf_tgt_br" 00:17:55.882 14:00:35 -- nvmf/common.sh@155 -- # true 00:17:55.882 14:00:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:55.882 Cannot find device "nvmf_tgt_br2" 00:17:55.882 14:00:35 -- nvmf/common.sh@156 -- # true 00:17:55.882 14:00:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:55.882 14:00:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:55.882 Cannot find device "nvmf_tgt_br" 00:17:55.882 14:00:35 -- nvmf/common.sh@158 -- # true 00:17:55.882 14:00:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:56.143 Cannot find device "nvmf_tgt_br2" 00:17:56.143 14:00:35 -- nvmf/common.sh@159 -- # true 00:17:56.143 14:00:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:56.143 14:00:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:56.143 14:00:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:56.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.143 14:00:35 -- nvmf/common.sh@162 -- # true 00:17:56.143 14:00:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:56.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.143 14:00:35 -- nvmf/common.sh@163 -- # true 00:17:56.143 14:00:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:56.143 14:00:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:56.143 14:00:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:56.143 14:00:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:56.143 14:00:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:56.143 14:00:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
00:17:56.143 14:00:35 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:56.143 14:00:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:56.143 14:00:35 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:56.143 14:00:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:56.143 14:00:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:56.143 14:00:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:56.143 14:00:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:56.143 14:00:35 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:56.143 14:00:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:56.143 14:00:35 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:56.143 14:00:35 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:56.143 14:00:35 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:56.143 14:00:35 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:56.143 14:00:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:56.143 14:00:35 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:56.143 14:00:35 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:56.143 14:00:35 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:56.402 14:00:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:56.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:17:56.402 00:17:56.402 --- 10.0.0.2 ping statistics --- 00:17:56.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.402 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:17:56.402 14:00:35 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:56.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:56.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:17:56.402 00:17:56.402 --- 10.0.0.3 ping statistics --- 00:17:56.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.402 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:17:56.402 14:00:35 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:56.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:56.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:17:56.402 00:17:56.402 --- 10.0.0.1 ping statistics --- 00:17:56.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.402 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:56.402 14:00:35 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.402 14:00:35 -- nvmf/common.sh@422 -- # return 0 00:17:56.402 14:00:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:56.402 14:00:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.402 14:00:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:56.402 14:00:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:56.402 14:00:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.402 14:00:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:56.402 14:00:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:56.402 14:00:35 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:17:56.402 14:00:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:56.402 14:00:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:56.402 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:17:56.402 14:00:35 -- nvmf/common.sh@470 -- # nvmfpid=72326 00:17:56.402 14:00:35 -- nvmf/common.sh@471 -- # waitforlisten 72326 00:17:56.402 14:00:35 -- common/autotest_common.sh@817 -- # '[' -z 72326 ']' 00:17:56.402 14:00:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:56.402 14:00:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.402 14:00:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:56.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.402 14:00:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.402 14:00:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:56.402 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:17:56.402 [2024-04-26 14:00:35.980455] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:17:56.402 [2024-04-26 14:00:35.980594] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.661 [2024-04-26 14:00:36.156149] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.922 [2024-04-26 14:00:36.397048] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.922 [2024-04-26 14:00:36.397349] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.922 [2024-04-26 14:00:36.397544] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.922 [2024-04-26 14:00:36.397561] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.922 [2024-04-26 14:00:36.397574] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
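For orientation: the nvmf_veth_init trace above builds a small veth/bridge topology before any NVMe/TCP traffic flows. A minimal sketch of the same setup, with interface names and addresses taken from this log and the intermediate "link up" commands omitted for brevity:

  ip netns add nvmf_tgt_ns_spdk                                      # target runs in its own network namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                                    # bridge the host-side peers together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP on port 4420
  ping -c 1 10.0.0.2                                                 # same connectivity check as in the trace

The three pings in the trace confirm that 10.0.0.1 (initiator) and the two target-namespace addresses 10.0.0.2 and 10.0.0.3 can reach each other before the tests begin.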
00:17:56.922 [2024-04-26 14:00:36.397754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.922 [2024-04-26 14:00:36.397934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.922 [2024-04-26 14:00:36.398617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.922 [2024-04-26 14:00:36.398650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.181 14:00:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.181 14:00:36 -- common/autotest_common.sh@850 -- # return 0 00:17:57.181 14:00:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:57.181 14:00:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:57.181 14:00:36 -- common/autotest_common.sh@10 -- # set +x 00:17:57.440 14:00:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.440 14:00:36 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:57.440 [2024-04-26 14:00:37.051076] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.698 14:00:37 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:17:57.698 14:00:37 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:17:57.698 14:00:37 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:57.957 Malloc1 00:17:57.957 14:00:37 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:58.215 Malloc2 00:17:58.215 14:00:37 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:58.474 14:00:37 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:58.474 14:00:38 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.733 [2024-04-26 14:00:38.321370] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.733 14:00:38 -- target/ns_masking.sh@61 -- # connect 00:17:58.733 14:00:38 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 32055c71-1abc-4e4f-b8a6-162ca9450851 -a 10.0.0.2 -s 4420 -i 4 00:17:58.992 14:00:38 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:17:58.992 14:00:38 -- common/autotest_common.sh@1184 -- # local i=0 00:17:58.992 14:00:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.992 14:00:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:17:58.992 14:00:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:00.898 14:00:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:00.898 14:00:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:00.898 14:00:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:00.898 14:00:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:00.898 14:00:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:00.898 14:00:40 -- common/autotest_common.sh@1194 -- # return 0 00:18:00.898 14:00:40 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:00.898 14:00:40 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:00.898 14:00:40 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:00.898 14:00:40 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:00.898 14:00:40 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:18:00.898 14:00:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:00.898 14:00:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:00.898 [ 0]:0x1 00:18:00.898 14:00:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:00.898 14:00:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.164 14:00:40 -- target/ns_masking.sh@40 -- # nguid=84b0671d43544b8396525ec7cdde6ae7 00:18:01.164 14:00:40 -- target/ns_masking.sh@41 -- # [[ 84b0671d43544b8396525ec7cdde6ae7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.164 14:00:40 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:01.164 14:00:40 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:18:01.164 14:00:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:01.164 14:00:40 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:01.164 [ 0]:0x1 00:18:01.164 14:00:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.164 14:00:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.422 14:00:40 -- target/ns_masking.sh@40 -- # nguid=84b0671d43544b8396525ec7cdde6ae7 00:18:01.422 14:00:40 -- target/ns_masking.sh@41 -- # [[ 84b0671d43544b8396525ec7cdde6ae7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.422 14:00:40 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:18:01.422 14:00:40 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:01.422 14:00:40 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:01.422 [ 1]:0x2 00:18:01.422 14:00:40 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.422 14:00:40 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:01.422 14:00:40 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:01.422 14:00:40 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.422 14:00:40 -- target/ns_masking.sh@69 -- # disconnect 00:18:01.422 14:00:40 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.422 14:00:41 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:01.684 14:00:41 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:01.977 14:00:41 -- target/ns_masking.sh@77 -- # connect 1 00:18:01.977 14:00:41 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 32055c71-1abc-4e4f-b8a6-162ca9450851 -a 10.0.0.2 -s 4420 -i 4 00:18:01.977 14:00:41 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:01.977 14:00:41 -- common/autotest_common.sh@1184 -- # local i=0 00:18:01.977 14:00:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.977 14:00:41 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:18:01.977 14:00:41 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:18:01.977 14:00:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:04.507 14:00:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:04.507 14:00:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:04.507 14:00:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:04.507 14:00:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:18:04.507 14:00:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.507 14:00:43 -- common/autotest_common.sh@1194 -- # return 0 00:18:04.507 14:00:43 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:04.507 14:00:43 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:04.507 14:00:43 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:04.507 14:00:43 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:04.507 14:00:43 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:18:04.507 14:00:43 -- common/autotest_common.sh@638 -- # local es=0 00:18:04.507 14:00:43 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:18:04.507 14:00:43 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:18:04.507 14:00:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.507 14:00:43 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:18:04.507 14:00:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.507 14:00:43 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:18:04.507 14:00:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.507 14:00:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:04.507 14:00:43 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.507 14:00:43 -- common/autotest_common.sh@641 -- # es=1 00:18:04.507 14:00:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:04.507 14:00:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:04.507 14:00:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:04.507 14:00:43 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:18:04.507 14:00:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.507 14:00:43 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:04.507 [ 0]:0x2 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:04.507 14:00:43 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.507 14:00:43 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:04.507 14:00:43 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:18:04.507 14:00:43 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.507 14:00:43 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:04.507 [ 0]:0x1 00:18:04.507 
14:00:43 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.507 14:00:43 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.507 14:00:44 -- target/ns_masking.sh@40 -- # nguid=84b0671d43544b8396525ec7cdde6ae7 00:18:04.507 14:00:44 -- target/ns_masking.sh@41 -- # [[ 84b0671d43544b8396525ec7cdde6ae7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.507 14:00:44 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:18:04.507 14:00:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.507 14:00:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:04.507 [ 1]:0x2 00:18:04.507 14:00:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.507 14:00:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.507 14:00:44 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:04.507 14:00:44 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.507 14:00:44 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:04.765 14:00:44 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:18:04.765 14:00:44 -- common/autotest_common.sh@638 -- # local es=0 00:18:04.765 14:00:44 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:18:04.765 14:00:44 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:18:04.765 14:00:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.765 14:00:44 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:18:04.765 14:00:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:04.765 14:00:44 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:18:04.765 14:00:44 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:04.765 14:00:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.766 14:00:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:04.766 14:00:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.766 14:00:44 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:04.766 14:00:44 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.766 14:00:44 -- common/autotest_common.sh@641 -- # es=1 00:18:04.766 14:00:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:04.766 14:00:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:04.766 14:00:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:04.766 14:00:44 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:18:04.766 14:00:44 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:04.766 14:00:44 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:04.766 [ 0]:0x2 00:18:04.766 14:00:44 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:04.766 14:00:44 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:04.766 14:00:44 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:04.766 14:00:44 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:04.766 14:00:44 -- target/ns_masking.sh@91 -- # disconnect 00:18:04.766 14:00:44 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:04.766 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.766 14:00:44 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:05.023 14:00:44 -- target/ns_masking.sh@95 -- # connect 2 00:18:05.023 14:00:44 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 32055c71-1abc-4e4f-b8a6-162ca9450851 -a 10.0.0.2 -s 4420 -i 4 00:18:05.282 14:00:44 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:05.282 14:00:44 -- common/autotest_common.sh@1184 -- # local i=0 00:18:05.282 14:00:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:05.282 14:00:44 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:18:05.282 14:00:44 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:18:05.282 14:00:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:07.179 14:00:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:07.179 14:00:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:07.179 14:00:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.179 14:00:46 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:18:07.179 14:00:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.179 14:00:46 -- common/autotest_common.sh@1194 -- # return 0 00:18:07.179 14:00:46 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:18:07.179 14:00:46 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:07.179 14:00:46 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:18:07.179 14:00:46 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:18:07.180 14:00:46 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:18:07.180 14:00:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:07.180 14:00:46 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:07.180 [ 0]:0x1 00:18:07.180 14:00:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.180 14:00:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:07.437 14:00:46 -- target/ns_masking.sh@40 -- # nguid=84b0671d43544b8396525ec7cdde6ae7 00:18:07.437 14:00:46 -- target/ns_masking.sh@41 -- # [[ 84b0671d43544b8396525ec7cdde6ae7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.437 14:00:46 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:18:07.437 14:00:46 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:07.437 14:00:46 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:07.437 [ 1]:0x2 00:18:07.437 14:00:46 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.437 14:00:46 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:07.437 14:00:46 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:07.437 14:00:46 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.437 14:00:46 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:07.695 14:00:47 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:18:07.695 14:00:47 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.695 14:00:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
00:18:07.695 14:00:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.695 14:00:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:18:07.695 14:00:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:07.695 14:00:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:07.695 14:00:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.695 14:00:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:07.695 14:00:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:07.695 14:00:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.695 14:00:47 -- common/autotest_common.sh@641 -- # es=1 00:18:07.695 14:00:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.695 14:00:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:07.695 14:00:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.695 14:00:47 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:18:07.695 14:00:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:07.695 14:00:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:07.695 [ 0]:0x2 00:18:07.695 14:00:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.695 14:00:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:07.695 14:00:47 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:07.695 14:00:47 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.695 14:00:47 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:07.695 14:00:47 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.695 14:00:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:07.695 14:00:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.695 14:00:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.695 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.695 14:00:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.695 14:00:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:07.695 14:00:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:07.962 [2024-04-26 14:00:47.484763] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:07.962 2024/04/26 14:00:47 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:18:07.962 request: 00:18:07.962 { 00:18:07.962 "method": "nvmf_ns_remove_host", 00:18:07.962 "params": { 00:18:07.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.962 "nsid": 2, 00:18:07.962 "host": "nqn.2016-06.io.spdk:host1" 00:18:07.963 } 00:18:07.963 } 00:18:07.963 Got JSON-RPC error response 00:18:07.963 GoRPCClient: error on JSON-RPC call 00:18:07.963 14:00:47 -- common/autotest_common.sh@641 -- # es=1 00:18:07.963 14:00:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.963 14:00:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:07.963 14:00:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.963 14:00:47 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:18:07.963 14:00:47 -- common/autotest_common.sh@638 -- # local es=0 00:18:07.963 14:00:47 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:18:07.963 14:00:47 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:18:07.963 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.963 14:00:47 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:18:07.963 14:00:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:07.963 14:00:47 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:18:07.963 14:00:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:07.963 14:00:47 -- target/ns_masking.sh@39 -- # grep 0x1 00:18:07.963 14:00:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.963 14:00:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:07.963 14:00:47 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:18:07.963 14:00:47 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.963 14:00:47 -- common/autotest_common.sh@641 -- # es=1 00:18:07.963 14:00:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:07.963 14:00:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:07.963 14:00:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:07.963 14:00:47 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:18:07.963 14:00:47 -- target/ns_masking.sh@39 -- # grep 0x2 00:18:07.963 14:00:47 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:18:07.963 [ 0]:0x2 00:18:07.963 14:00:47 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:07.963 14:00:47 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:18:08.221 14:00:47 -- target/ns_masking.sh@40 -- # nguid=b8db9144a4614662898c7534c71d9eca 00:18:08.221 14:00:47 -- target/ns_masking.sh@41 -- # [[ b8db9144a4614662898c7534c71d9eca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.221 14:00:47 -- target/ns_masking.sh@108 -- # disconnect 00:18:08.221 14:00:47 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.222 14:00:47 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.480 14:00:47 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:08.480 14:00:47 -- target/ns_masking.sh@114 -- # nvmftestfini 
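Stripped of the xtrace noise, the namespace-masking sequence exercised here reduces to a few rpc.py calls plus an nvme-cli visibility probe. A sketch using the NQNs from this run (rpc.py being scripts/rpc.py in the SPDK tree):

  # add a namespace that stays hidden from all hosts until explicitly exposed
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  # expose, then hide again, NSID 1 for one specific host NQN
  rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # NSID 2 was added without --no-auto-visible, so the same call against it is
  # rejected with Code=-32602 "Invalid parameters", as the JSON-RPC error above shows

  # from the initiator, visibility is probed by listing namespaces and reading the NGUID
  nvme list-ns /dev/nvme0
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros when the namespace is masked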
00:18:08.480 14:00:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:08.480 14:00:47 -- nvmf/common.sh@117 -- # sync 00:18:08.480 14:00:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:08.480 14:00:47 -- nvmf/common.sh@120 -- # set +e 00:18:08.480 14:00:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:08.480 14:00:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:08.480 rmmod nvme_tcp 00:18:08.480 rmmod nvme_fabrics 00:18:08.480 rmmod nvme_keyring 00:18:08.480 14:00:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:08.480 14:00:47 -- nvmf/common.sh@124 -- # set -e 00:18:08.480 14:00:47 -- nvmf/common.sh@125 -- # return 0 00:18:08.480 14:00:47 -- nvmf/common.sh@478 -- # '[' -n 72326 ']' 00:18:08.480 14:00:47 -- nvmf/common.sh@479 -- # killprocess 72326 00:18:08.480 14:00:47 -- common/autotest_common.sh@936 -- # '[' -z 72326 ']' 00:18:08.480 14:00:47 -- common/autotest_common.sh@940 -- # kill -0 72326 00:18:08.480 14:00:47 -- common/autotest_common.sh@941 -- # uname 00:18:08.480 14:00:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.480 14:00:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72326 00:18:08.480 14:00:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.480 14:00:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.480 killing process with pid 72326 00:18:08.480 14:00:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72326' 00:18:08.480 14:00:48 -- common/autotest_common.sh@955 -- # kill 72326 00:18:08.480 14:00:48 -- common/autotest_common.sh@960 -- # wait 72326 00:18:10.383 14:00:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:10.383 14:00:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:10.384 14:00:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:10.384 14:00:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:10.384 14:00:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:10.384 14:00:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.384 14:00:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.384 14:00:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.384 14:00:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:10.384 00:18:10.384 real 0m14.529s 00:18:10.384 user 0m54.248s 00:18:10.384 sys 0m3.218s 00:18:10.384 14:00:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:10.384 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:18:10.384 ************************************ 00:18:10.384 END TEST nvmf_ns_masking 00:18:10.384 ************************************ 00:18:10.384 14:00:49 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:18:10.384 14:00:49 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:18:10.384 14:00:49 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:10.384 14:00:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:10.384 14:00:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:10.384 14:00:49 -- common/autotest_common.sh@10 -- # set +x 00:18:10.384 ************************************ 00:18:10.384 START TEST nvmf_host_management 00:18:10.384 ************************************ 00:18:10.384 14:00:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:18:10.652 * Looking for test storage... 
00:18:10.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:10.652 14:00:50 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:10.652 14:00:50 -- nvmf/common.sh@7 -- # uname -s 00:18:10.652 14:00:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.652 14:00:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.652 14:00:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.652 14:00:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:10.652 14:00:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.652 14:00:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.652 14:00:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.652 14:00:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.652 14:00:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.652 14:00:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.652 14:00:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:18:10.652 14:00:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:18:10.652 14:00:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.652 14:00:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.652 14:00:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:10.652 14:00:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.652 14:00:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:10.652 14:00:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.653 14:00:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.653 14:00:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.653 14:00:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.653 14:00:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.653 14:00:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.653 14:00:50 -- paths/export.sh@5 -- # export PATH 00:18:10.653 14:00:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.653 14:00:50 -- nvmf/common.sh@47 -- # : 0 00:18:10.653 14:00:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:10.653 14:00:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:10.653 14:00:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.653 14:00:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.653 14:00:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.653 14:00:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:10.653 14:00:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:10.653 14:00:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:10.653 14:00:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.653 14:00:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.653 14:00:50 -- target/host_management.sh@105 -- # nvmftestinit 00:18:10.653 14:00:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:10.653 14:00:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.653 14:00:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:10.653 14:00:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:10.653 14:00:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:10.653 14:00:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.653 14:00:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:10.653 14:00:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.653 14:00:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:10.653 14:00:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:10.653 14:00:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:10.653 14:00:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:10.653 14:00:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:10.653 14:00:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:10.653 14:00:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.653 14:00:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.653 14:00:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:10.653 14:00:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:10.654 14:00:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:10.654 14:00:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:10.654 14:00:50 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:10.654 14:00:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.654 14:00:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:10.654 14:00:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:10.654 14:00:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:10.654 14:00:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:10.654 14:00:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:10.654 14:00:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:10.654 Cannot find device "nvmf_tgt_br" 00:18:10.654 14:00:50 -- nvmf/common.sh@155 -- # true 00:18:10.654 14:00:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:10.654 Cannot find device "nvmf_tgt_br2" 00:18:10.654 14:00:50 -- nvmf/common.sh@156 -- # true 00:18:10.654 14:00:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:10.654 14:00:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:10.654 Cannot find device "nvmf_tgt_br" 00:18:10.654 14:00:50 -- nvmf/common.sh@158 -- # true 00:18:10.654 14:00:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:10.654 Cannot find device "nvmf_tgt_br2" 00:18:10.654 14:00:50 -- nvmf/common.sh@159 -- # true 00:18:10.654 14:00:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:10.913 14:00:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:10.913 14:00:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:10.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.913 14:00:50 -- nvmf/common.sh@162 -- # true 00:18:10.913 14:00:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:10.913 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:10.913 14:00:50 -- nvmf/common.sh@163 -- # true 00:18:10.913 14:00:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:10.913 14:00:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:10.913 14:00:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:10.913 14:00:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:10.913 14:00:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:10.913 14:00:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:10.913 14:00:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:10.913 14:00:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:10.913 14:00:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:10.913 14:00:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:10.913 14:00:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:10.913 14:00:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:10.913 14:00:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:10.913 14:00:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:10.913 14:00:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:10.913 14:00:50 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:18:10.913 14:00:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:10.913 14:00:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:10.913 14:00:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:10.913 14:00:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:10.913 14:00:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:10.913 14:00:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:10.913 14:00:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:10.914 14:00:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:10.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:18:10.914 00:18:10.914 --- 10.0.0.2 ping statistics --- 00:18:10.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.914 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:10.914 14:00:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:10.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:10.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:18:10.914 00:18:10.914 --- 10.0.0.3 ping statistics --- 00:18:10.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.914 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:10.914 14:00:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:10.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:18:10.914 00:18:10.914 --- 10.0.0.1 ping statistics --- 00:18:10.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.914 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:10.914 14:00:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.914 14:00:50 -- nvmf/common.sh@422 -- # return 0 00:18:10.914 14:00:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:10.914 14:00:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.914 14:00:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:10.914 14:00:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:10.914 14:00:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.914 14:00:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:10.914 14:00:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:11.172 14:00:50 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:18:11.172 14:00:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:11.172 14:00:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:11.172 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 ************************************ 00:18:11.172 START TEST nvmf_host_management 00:18:11.172 ************************************ 00:18:11.172 14:00:50 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:18:11.172 14:00:50 -- target/host_management.sh@69 -- # starttarget 00:18:11.172 14:00:50 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:18:11.172 14:00:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:11.172 14:00:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:11.172 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 14:00:50 -- nvmf/common.sh@470 -- # nvmfpid=72902 00:18:11.172 
14:00:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:11.172 14:00:50 -- nvmf/common.sh@471 -- # waitforlisten 72902 00:18:11.172 14:00:50 -- common/autotest_common.sh@817 -- # '[' -z 72902 ']' 00:18:11.172 14:00:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.172 14:00:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.172 14:00:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.172 14:00:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.172 14:00:50 -- common/autotest_common.sh@10 -- # set +x 00:18:11.172 [2024-04-26 14:00:50.821608] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:11.172 [2024-04-26 14:00:50.821726] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.431 [2024-04-26 14:00:50.998639] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:11.688 [2024-04-26 14:00:51.248122] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.688 [2024-04-26 14:00:51.248179] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.688 [2024-04-26 14:00:51.248196] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.688 [2024-04-26 14:00:51.248207] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.688 [2024-04-26 14:00:51.248220] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
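Before bdevperf attaches, the target gets provisioned via the rpcs.txt assembled by the test. Judging from the transport options and names visible in this run (the Malloc0 bdev, the 10.0.0.2:4420 listener, and the cnode0 subsystem referenced by the bdevperf config further down), the sequence is roughly:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Treat this as a sketch of the intent rather than the literal rpcs.txt contents, which the trace does not print in full.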
00:18:11.688 [2024-04-26 14:00:51.248371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.688 [2024-04-26 14:00:51.249483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.688 [2024-04-26 14:00:51.249702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:11.688 [2024-04-26 14:00:51.249838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.255 14:00:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:12.255 14:00:51 -- common/autotest_common.sh@850 -- # return 0 00:18:12.255 14:00:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:12.255 14:00:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:12.255 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.255 14:00:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.255 14:00:51 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.255 14:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.255 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.255 [2024-04-26 14:00:51.755213] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.255 14:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.255 14:00:51 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:18:12.255 14:00:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:12.255 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.255 14:00:51 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:18:12.255 14:00:51 -- target/host_management.sh@23 -- # cat 00:18:12.255 14:00:51 -- target/host_management.sh@30 -- # rpc_cmd 00:18:12.255 14:00:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:12.255 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.255 Malloc0 00:18:12.255 [2024-04-26 14:00:51.922113] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.514 14:00:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:12.514 14:00:51 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:18:12.514 14:00:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:12.514 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 14:00:51 -- target/host_management.sh@73 -- # perfpid=72980 00:18:12.514 14:00:51 -- target/host_management.sh@74 -- # waitforlisten 72980 /var/tmp/bdevperf.sock 00:18:12.514 14:00:51 -- common/autotest_common.sh@817 -- # '[' -z 72980 ']' 00:18:12.514 14:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.514 14:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:12.514 14:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
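With the target up, starttarget provisions it over RPC: the transport is created directly (the nvmf_create_transport call at @18 above), and the bdev/subsystem wiring is assembled into rpcs.txt and replayed through rpc_cmd. The batch itself is not echoed in the log, so the calls below are a sketch of a provisioning sequence consistent with the Malloc0 bdev, the 10.0.0.2:4420 listener, and the cnode0/host0 NQNs seen later, not a verbatim copy of rpcs.txt:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192             # mirrors the call above (-u sets the I/O unit size)
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # RAM-backed bdev to export
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Keeping the subsystem restricted to an explicit host list (rather than allow-any-host) is what lets the later nvmf_subsystem_remove_host step cut the initiator off mid-run.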
00:18:12.514 14:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:12.514 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:18:12.514 14:00:51 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:18:12.514 14:00:51 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:12.514 14:00:51 -- nvmf/common.sh@521 -- # config=() 00:18:12.514 14:00:51 -- nvmf/common.sh@521 -- # local subsystem config 00:18:12.514 14:00:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:12.514 14:00:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:12.514 { 00:18:12.514 "params": { 00:18:12.514 "name": "Nvme$subsystem", 00:18:12.514 "trtype": "$TEST_TRANSPORT", 00:18:12.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.514 "adrfam": "ipv4", 00:18:12.514 "trsvcid": "$NVMF_PORT", 00:18:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.514 "hdgst": ${hdgst:-false}, 00:18:12.514 "ddgst": ${ddgst:-false} 00:18:12.514 }, 00:18:12.514 "method": "bdev_nvme_attach_controller" 00:18:12.514 } 00:18:12.514 EOF 00:18:12.514 )") 00:18:12.514 14:00:51 -- nvmf/common.sh@543 -- # cat 00:18:12.514 14:00:51 -- nvmf/common.sh@545 -- # jq . 00:18:12.514 14:00:52 -- nvmf/common.sh@546 -- # IFS=, 00:18:12.514 14:00:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:12.514 "params": { 00:18:12.514 "name": "Nvme0", 00:18:12.514 "trtype": "tcp", 00:18:12.514 "traddr": "10.0.0.2", 00:18:12.514 "adrfam": "ipv4", 00:18:12.514 "trsvcid": "4420", 00:18:12.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:12.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:12.514 "hdgst": false, 00:18:12.514 "ddgst": false 00:18:12.514 }, 00:18:12.514 "method": "bdev_nvme_attach_controller" 00:18:12.514 }' 00:18:12.514 [2024-04-26 14:00:52.084650] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:12.514 [2024-04-26 14:00:52.084779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72980 ] 00:18:12.774 [2024-04-26 14:00:52.243550] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.036 [2024-04-26 14:00:52.497919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.602 Running I/O for 10 seconds... 
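On the initiator side, bdevperf is not pointed at a config file on disk; gen_nvmf_target_json writes a one-off JSON config into a process substitution, which appears as --json /dev/fd/63 in the command line above. The log only prints the bdev_nvme_attach_controller fragment; a sketch of the complete document bdevperf consumes, assuming the standard SPDK subsystems/config wrapper around that fragment, looks like:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

Attaching the controller this way makes the remote namespace show up as bdev Nvme0n1 (controller name plus namespace index), which is the name the verify workload and the iostat polling further down operate on.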
00:18:13.602 14:00:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:13.602 14:00:53 -- common/autotest_common.sh@850 -- # return 0 00:18:13.602 14:00:53 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:13.602 14:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.602 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 14:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.602 14:00:53 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:13.602 14:00:53 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:18:13.602 14:00:53 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:18:13.602 14:00:53 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:18:13.602 14:00:53 -- target/host_management.sh@52 -- # local ret=1 00:18:13.602 14:00:53 -- target/host_management.sh@53 -- # local i 00:18:13.602 14:00:53 -- target/host_management.sh@54 -- # (( i = 10 )) 00:18:13.602 14:00:53 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:13.602 14:00:53 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:13.602 14:00:53 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:13.602 14:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.602 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.602 14:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.603 14:00:53 -- target/host_management.sh@55 -- # read_io_count=67 00:18:13.603 14:00:53 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:18:13.603 14:00:53 -- target/host_management.sh@62 -- # sleep 0.25 00:18:13.862 14:00:53 -- target/host_management.sh@54 -- # (( i-- )) 00:18:13.862 14:00:53 -- target/host_management.sh@54 -- # (( i != 0 )) 00:18:13.862 14:00:53 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:18:13.862 14:00:53 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:18:13.862 14:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.862 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.862 14:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.862 14:00:53 -- target/host_management.sh@55 -- # read_io_count=579 00:18:13.862 14:00:53 -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:18:13.862 14:00:53 -- target/host_management.sh@59 -- # ret=0 00:18:13.862 14:00:53 -- target/host_management.sh@60 -- # break 00:18:13.862 14:00:53 -- target/host_management.sh@64 -- # return 0 00:18:13.862 14:00:53 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:13.862 14:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.862 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.862 [2024-04-26 14:00:53.429875] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-04-26 14:00:53.430084] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430105] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430117] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430129] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430140] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430162] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430185] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430228] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.862 [2024-04-26 14:00:53.430239] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430250] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430261] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430326] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430336] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430347] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430379] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:18:13.863 id:0 cdw10:00000000 cdw11:00000000 00:18:13.863 [2024-04-26 14:00:53.430519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.863 [2024-04-26 14:00:53.430563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.863 [2024-04-26 14:00:53.430590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.863 [2024-04-26 14:00:53.430617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005a40 is same with the state(5) to be set 00:18:13.863 [2024-04-26 14:00:53.430718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.430986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.430999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431130] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:92416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:92544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:92800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:93184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.431974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.431986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:93568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:96256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:96768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.863 [2024-04-26 14:00:53.432873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.863 [2024-04-26 14:00:53.432886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.864 [2024-04-26 14:00:53.432901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.864 [2024-04-26 14:00:53.432913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.864 [2024-04-26 14:00:53.432927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.864 [2024-04-26 14:00:53.432939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.864 [2024-04-26 14:00:53.432952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.864 [2024-04-26 14:00:53.432964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.864 [2024-04-26 14:00:53.432978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.864 [2024-04-26 14:00:53.432990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.864 [2024-04-26 14:00:53.433004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:13.864 [2024-04-26 14:00:53.433016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:13.864 [2024-04-26 14:00:53.433360] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:18:13.864 14:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.864 14:00:53 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:18:13.864 14:00:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:13.864 [2024-04-26 14:00:53.434418] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:13.864 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:18:13.864 task offset: 90112 on job bdev=Nvme0n1 fails 00:18:13.864 00:18:13.864 Latency(us) 00:18:13.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.864 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:13.864 Job: Nvme0n1 ended in about 0.44 seconds with error 00:18:13.864 Verification LBA range: start 0x0 length 0x400 00:18:13.864 Nvme0n1 : 0.44 1614.93 100.93 146.81 0.00 35290.41 4632.26 34110.30 00:18:13.864 =================================================================================================================== 00:18:13.864 Total : 1614.93 100.93 146.81 0.00 35290.41 4632.26 34110.30 00:18:13.864 [2024-04-26 14:00:53.439551] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:13.864 [2024-04-26 14:00:53.439597] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:18:13.864 14:00:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:13.864 14:00:53 -- target/host_management.sh@87 -- # sleep 1 00:18:13.864 [2024-04-26 14:00:53.449590] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:14.799 14:00:54 -- target/host_management.sh@91 -- # kill -9 72980 00:18:14.799 14:00:54 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:14.799 14:00:54 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:14.799 14:00:54 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:14.799 14:00:54 -- nvmf/common.sh@521 -- # config=() 00:18:14.799 14:00:54 -- nvmf/common.sh@521 -- # local subsystem config 00:18:14.799 14:00:54 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:14.799 14:00:54 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:14.799 { 00:18:14.799 "params": { 00:18:14.799 "name": "Nvme$subsystem", 00:18:14.799 "trtype": "$TEST_TRANSPORT", 00:18:14.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:14.799 "adrfam": "ipv4", 00:18:14.799 "trsvcid": "$NVMF_PORT", 00:18:14.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:14.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:14.799 "hdgst": ${hdgst:-false}, 00:18:14.799 "ddgst": ${ddgst:-false} 00:18:14.799 }, 00:18:14.799 "method": "bdev_nvme_attach_controller" 00:18:14.799 } 00:18:14.799 EOF 00:18:14.799 )") 00:18:14.799 14:00:54 -- nvmf/common.sh@543 -- # cat 00:18:14.799 14:00:54 -- nvmf/common.sh@545 -- # jq . 
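The flood of ABORTED - SQ DELETION completions and the failed 0.44-second job summary above are the point of the test rather than a defect: once waitforio has seen at least 100 reads complete on Nvme0n1, the host is removed from the subsystem's allow list, which tears down the active queues mid-I/O; bdevperf then resets the controller ("Resetting controller successful"), the host is re-added, the first bdevperf instance is killed, and a short second run has to complete cleanly. Condensed into plain commands (the polling loop is paraphrased from the trace, not copied from host_management.sh; $perfpid is the bdevperf pid, 72980 here):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # wait until bdevperf has pushed a minimum amount of I/O through Nvme0n1
  until [ "$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')" -ge 100 ]; do
      sleep 0.25
  done
  # cut the initiator off: the target aborts its queues and bdevperf must reset the controller
  $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # restore access, retire the first bdevperf instance, and prove I/O still flows with a short run
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  kill -9 $perfpid
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1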
00:18:14.799 14:00:54 -- nvmf/common.sh@546 -- # IFS=, 00:18:14.799 14:00:54 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:14.799 "params": { 00:18:14.799 "name": "Nvme0", 00:18:14.799 "trtype": "tcp", 00:18:14.799 "traddr": "10.0.0.2", 00:18:14.799 "adrfam": "ipv4", 00:18:14.799 "trsvcid": "4420", 00:18:14.799 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:14.799 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:14.799 "hdgst": false, 00:18:14.799 "ddgst": false 00:18:14.799 }, 00:18:14.799 "method": "bdev_nvme_attach_controller" 00:18:14.799 }' 00:18:15.057 [2024-04-26 14:00:54.545538] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:15.057 [2024-04-26 14:00:54.545680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73034 ] 00:18:15.057 [2024-04-26 14:00:54.719206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.315 [2024-04-26 14:00:54.964303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.881 Running I/O for 1 seconds... 00:18:16.818 00:18:16.818 Latency(us) 00:18:16.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.818 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:16.818 Verification LBA range: start 0x0 length 0x400 00:18:16.818 Nvme0n1 : 1.02 1751.96 109.50 0.00 0.00 35902.37 5553.45 32636.40 00:18:16.818 =================================================================================================================== 00:18:16.818 Total : 1751.96 109.50 0.00 0.00 35902.37 5553.45 32636.40 00:18:18.197 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 68: 72980 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:18:18.197 14:00:57 -- target/host_management.sh@102 -- # stoptarget 00:18:18.197 14:00:57 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:18.197 14:00:57 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:18:18.197 14:00:57 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:18:18.197 14:00:57 -- target/host_management.sh@40 -- # nvmftestfini 00:18:18.197 14:00:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:18.197 14:00:57 -- nvmf/common.sh@117 -- # sync 00:18:18.197 14:00:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.197 14:00:57 -- nvmf/common.sh@120 -- # set +e 00:18:18.197 14:00:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.197 14:00:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.456 rmmod nvme_tcp 00:18:18.456 rmmod nvme_fabrics 00:18:18.456 rmmod nvme_keyring 00:18:18.456 14:00:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.456 14:00:57 -- nvmf/common.sh@124 -- # set -e 00:18:18.456 14:00:57 -- nvmf/common.sh@125 -- # return 0 00:18:18.456 14:00:57 -- nvmf/common.sh@478 -- # '[' -n 72902 ']' 00:18:18.456 14:00:57 -- nvmf/common.sh@479 -- # killprocess 72902 00:18:18.456 14:00:57 -- common/autotest_common.sh@936 -- # '[' -z 72902 ']' 00:18:18.456 14:00:57 -- common/autotest_common.sh@940 -- # kill -0 72902 00:18:18.456 14:00:57 -- common/autotest_common.sh@941 -- # uname 00:18:18.456 14:00:57 -- common/autotest_common.sh@941 -- 
# '[' Linux = Linux ']' 00:18:18.456 14:00:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72902 00:18:18.456 killing process with pid 72902 00:18:18.456 14:00:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:18.456 14:00:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:18.456 14:00:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72902' 00:18:18.456 14:00:57 -- common/autotest_common.sh@955 -- # kill 72902 00:18:18.456 14:00:57 -- common/autotest_common.sh@960 -- # wait 72902 00:18:19.834 [2024-04-26 14:00:59.409919] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:19.834 14:00:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:19.834 14:00:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:19.834 14:00:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:19.834 14:00:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.834 14:00:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.835 14:00:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.835 14:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.835 14:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.096 14:00:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:20.096 00:18:20.096 real 0m8.846s 00:18:20.096 user 0m36.663s 00:18:20.096 sys 0m1.596s 00:18:20.096 14:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:20.096 14:00:59 -- common/autotest_common.sh@10 -- # set +x 00:18:20.096 ************************************ 00:18:20.096 END TEST nvmf_host_management 00:18:20.096 ************************************ 00:18:20.096 14:00:59 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:18:20.096 00:18:20.096 real 0m9.640s 00:18:20.096 user 0m36.890s 00:18:20.096 sys 0m1.991s 00:18:20.096 14:00:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:20.096 14:00:59 -- common/autotest_common.sh@10 -- # set +x 00:18:20.096 ************************************ 00:18:20.096 END TEST nvmf_host_management 00:18:20.096 ************************************ 00:18:20.096 14:00:59 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:20.096 14:00:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.096 14:00:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.096 14:00:59 -- common/autotest_common.sh@10 -- # set +x 00:18:20.096 ************************************ 00:18:20.096 START TEST nvmf_lvol 00:18:20.096 ************************************ 00:18:20.096 14:00:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:18:20.356 * Looking for test storage... 
00:18:20.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:20.356 14:00:59 -- nvmf/common.sh@7 -- # uname -s 00:18:20.356 14:00:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.356 14:00:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.356 14:00:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.356 14:00:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.356 14:00:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.356 14:00:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.356 14:00:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.356 14:00:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.356 14:00:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.356 14:00:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.356 14:00:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:18:20.356 14:00:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:18:20.356 14:00:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.356 14:00:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.356 14:00:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:20.356 14:00:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.356 14:00:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:20.356 14:00:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.356 14:00:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.356 14:00:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.356 14:00:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.356 14:00:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.356 14:00:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.356 14:00:59 -- paths/export.sh@5 -- # export PATH 00:18:20.356 14:00:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.356 14:00:59 -- nvmf/common.sh@47 -- # : 0 00:18:20.356 14:00:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.356 14:00:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.356 14:00:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.356 14:00:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.356 14:00:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.356 14:00:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.356 14:00:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.356 14:00:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.356 14:00:59 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:20.356 14:00:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:20.356 14:00:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.356 14:00:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:20.356 14:00:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:20.356 14:00:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:20.356 14:00:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.356 14:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.356 14:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.356 14:00:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:20.356 14:00:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:20.356 14:00:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:20.356 14:00:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:20.356 14:00:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:20.356 14:00:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:20.356 14:00:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:20.356 14:00:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:20.356 14:00:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:20.356 14:00:59 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:20.356 14:00:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:20.356 14:00:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:20.356 14:00:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:20.356 14:00:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:20.356 14:00:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:20.356 14:00:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:20.356 14:00:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:20.356 14:00:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:20.356 14:00:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:20.356 14:00:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:20.356 Cannot find device "nvmf_tgt_br" 00:18:20.356 14:00:59 -- nvmf/common.sh@155 -- # true 00:18:20.356 14:00:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:20.356 Cannot find device "nvmf_tgt_br2" 00:18:20.356 14:00:59 -- nvmf/common.sh@156 -- # true 00:18:20.356 14:00:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:20.356 14:01:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:20.356 Cannot find device "nvmf_tgt_br" 00:18:20.356 14:01:00 -- nvmf/common.sh@158 -- # true 00:18:20.356 14:01:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:20.614 Cannot find device "nvmf_tgt_br2" 00:18:20.614 14:01:00 -- nvmf/common.sh@159 -- # true 00:18:20.614 14:01:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:20.614 14:01:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:20.614 14:01:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:20.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.614 14:01:00 -- nvmf/common.sh@162 -- # true 00:18:20.614 14:01:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:20.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:20.614 14:01:00 -- nvmf/common.sh@163 -- # true 00:18:20.614 14:01:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:20.614 14:01:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:20.614 14:01:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:20.614 14:01:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:20.614 14:01:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:20.614 14:01:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:20.614 14:01:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:20.614 14:01:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:20.614 14:01:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:20.614 14:01:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:20.614 14:01:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:20.614 14:01:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:20.614 14:01:00 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:20.614 14:01:00 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:20.614 14:01:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:20.614 14:01:00 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:20.614 14:01:00 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:20.614 14:01:00 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:20.614 14:01:00 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:20.614 14:01:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:20.872 14:01:00 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:20.872 14:01:00 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:20.872 14:01:00 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:20.872 14:01:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:20.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:20.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:18:20.872 00:18:20.872 --- 10.0.0.2 ping statistics --- 00:18:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.872 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:18:20.872 14:01:00 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:20.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:20.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:18:20.872 00:18:20.872 --- 10.0.0.3 ping statistics --- 00:18:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.872 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:20.872 14:01:00 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:20.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:20.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:20.872 00:18:20.872 --- 10.0.0.1 ping statistics --- 00:18:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:20.872 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:20.872 14:01:00 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:20.872 14:01:00 -- nvmf/common.sh@422 -- # return 0 00:18:20.872 14:01:00 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:20.872 14:01:00 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:20.872 14:01:00 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:20.872 14:01:00 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:20.872 14:01:00 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:20.872 14:01:00 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:20.872 14:01:00 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:20.872 14:01:00 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:20.872 14:01:00 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:20.872 14:01:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:20.872 14:01:00 -- common/autotest_common.sh@10 -- # set +x 00:18:20.872 14:01:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:20.872 14:01:00 -- nvmf/common.sh@470 -- # nvmfpid=73316 00:18:20.872 14:01:00 -- nvmf/common.sh@471 -- # waitforlisten 73316 00:18:20.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
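The nvmf_veth_init sequence traced above is the whole test network for these runs: one veth pair for the initiator, two for the target, the target ends moved into a private namespace, and the host-side peers tied together with a bridge. Condensed into plain commands, using exactly the interface names and addresses that appear in the trace, the topology is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# 10.0.0.1 stays with the initiator; 10.0.0.2 and 10.0.0.3 live inside the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring every link up, then bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# accept NVMe/TCP on the default port and allow forwarding between the bridge ports
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings traced just above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are the sanity check that the bridge forwards in both directions before nvme-tcp is loaded and the target is launched inside the namespace.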
00:18:20.872 14:01:00 -- common/autotest_common.sh@817 -- # '[' -z 73316 ']' 00:18:20.872 14:01:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.872 14:01:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.872 14:01:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.872 14:01:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.872 14:01:00 -- common/autotest_common.sh@10 -- # set +x 00:18:20.872 [2024-04-26 14:01:00.470434] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:20.872 [2024-04-26 14:01:00.470550] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.131 [2024-04-26 14:01:00.645988] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:21.389 [2024-04-26 14:01:00.897904] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.389 [2024-04-26 14:01:00.897980] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.389 [2024-04-26 14:01:00.897999] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.389 [2024-04-26 14:01:00.898023] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.389 [2024-04-26 14:01:00.898052] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.389 [2024-04-26 14:01:00.898313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.389 [2024-04-26 14:01:00.898469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.389 [2024-04-26 14:01:00.898501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.957 14:01:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.957 14:01:01 -- common/autotest_common.sh@850 -- # return 0 00:18:21.957 14:01:01 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:21.957 14:01:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:21.957 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:18:21.957 14:01:01 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.957 14:01:01 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:22.215 [2024-04-26 14:01:01.672710] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.215 14:01:01 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.473 14:01:02 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:22.473 14:01:02 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.731 14:01:02 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:22.731 14:01:02 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:22.989 14:01:02 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:23.248 14:01:02 -- target/nvmf_lvol.sh@29 -- # lvs=a4f0da98-de0a-44c0-944d-4e041e537c0c 00:18:23.248 
14:01:02 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a4f0da98-de0a-44c0-944d-4e041e537c0c lvol 20 00:18:23.506 14:01:02 -- target/nvmf_lvol.sh@32 -- # lvol=7f7aa0ca-4995-4ebb-bfc8-2fdb721b06b1 00:18:23.506 14:01:02 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:23.764 14:01:03 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f7aa0ca-4995-4ebb-bfc8-2fdb721b06b1 00:18:23.764 14:01:03 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:24.021 [2024-04-26 14:01:03.556937] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:24.021 14:01:03 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:24.278 14:01:03 -- target/nvmf_lvol.sh@42 -- # perf_pid=73458 00:18:24.278 14:01:03 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:24.278 14:01:03 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:25.214 14:01:04 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 7f7aa0ca-4995-4ebb-bfc8-2fdb721b06b1 MY_SNAPSHOT 00:18:25.780 14:01:05 -- target/nvmf_lvol.sh@47 -- # snapshot=0de5a8d3-a905-43ec-8adf-7c1150425a87 00:18:25.780 14:01:05 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 7f7aa0ca-4995-4ebb-bfc8-2fdb721b06b1 30 00:18:26.038 14:01:05 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 0de5a8d3-a905-43ec-8adf-7c1150425a87 MY_CLONE 00:18:26.297 14:01:05 -- target/nvmf_lvol.sh@49 -- # clone=f6779ebd-693b-4b64-b750-c8b544e805cc 00:18:26.297 14:01:05 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate f6779ebd-693b-4b64-b750-c8b544e805cc 00:18:26.861 14:01:06 -- target/nvmf_lvol.sh@53 -- # wait 73458 00:18:35.023 Initializing NVMe Controllers 00:18:35.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:35.023 Controller IO queue size 128, less than required. 00:18:35.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:35.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:35.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:35.023 Initialization complete. Launching workers. 
00:18:35.023 ======================================================== 00:18:35.023 Latency(us) 00:18:35.023 Device Information : IOPS MiB/s Average min max 00:18:35.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10289.51 40.19 12449.76 297.90 206891.15 00:18:35.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9961.52 38.91 12850.70 4560.54 167342.88 00:18:35.023 ======================================================== 00:18:35.023 Total : 20251.03 79.11 12646.98 297.90 206891.15 00:18:35.023 00:18:35.023 14:01:14 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:35.023 14:01:14 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7f7aa0ca-4995-4ebb-bfc8-2fdb721b06b1 00:18:35.023 14:01:14 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4f0da98-de0a-44c0-944d-4e041e537c0c 00:18:35.281 14:01:14 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:35.281 14:01:14 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:35.281 14:01:14 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:35.281 14:01:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:35.281 14:01:14 -- nvmf/common.sh@117 -- # sync 00:18:35.281 14:01:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.281 14:01:14 -- nvmf/common.sh@120 -- # set +e 00:18:35.281 14:01:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.281 14:01:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.281 rmmod nvme_tcp 00:18:35.282 rmmod nvme_fabrics 00:18:35.282 rmmod nvme_keyring 00:18:35.282 14:01:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.282 14:01:14 -- nvmf/common.sh@124 -- # set -e 00:18:35.282 14:01:14 -- nvmf/common.sh@125 -- # return 0 00:18:35.282 14:01:14 -- nvmf/common.sh@478 -- # '[' -n 73316 ']' 00:18:35.282 14:01:14 -- nvmf/common.sh@479 -- # killprocess 73316 00:18:35.282 14:01:14 -- common/autotest_common.sh@936 -- # '[' -z 73316 ']' 00:18:35.282 14:01:14 -- common/autotest_common.sh@940 -- # kill -0 73316 00:18:35.282 14:01:14 -- common/autotest_common.sh@941 -- # uname 00:18:35.282 14:01:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:35.282 14:01:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73316 00:18:35.282 killing process with pid 73316 00:18:35.282 14:01:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:35.282 14:01:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:35.282 14:01:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73316' 00:18:35.282 14:01:14 -- common/autotest_common.sh@955 -- # kill 73316 00:18:35.282 14:01:14 -- common/autotest_common.sh@960 -- # wait 73316 00:18:37.181 14:01:16 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:37.181 14:01:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:37.181 14:01:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:37.181 14:01:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.181 14:01:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:37.181 14:01:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.181 14:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.181 14:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.181 14:01:16 -- nvmf/common.sh@279 -- # ip -4 addr flush 
nvmf_init_if 00:18:37.181 ************************************ 00:18:37.181 END TEST nvmf_lvol 00:18:37.181 ************************************ 00:18:37.181 00:18:37.181 real 0m16.946s 00:18:37.181 user 1m6.121s 00:18:37.181 sys 0m5.050s 00:18:37.182 14:01:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:37.182 14:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.182 14:01:16 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:37.182 14:01:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:37.182 14:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:37.182 14:01:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.441 ************************************ 00:18:37.441 START TEST nvmf_lvs_grow 00:18:37.441 ************************************ 00:18:37.441 14:01:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:18:37.441 * Looking for test storage... 00:18:37.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:37.441 14:01:17 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:37.441 14:01:17 -- nvmf/common.sh@7 -- # uname -s 00:18:37.441 14:01:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.441 14:01:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.441 14:01:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.441 14:01:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.441 14:01:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.441 14:01:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.441 14:01:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.441 14:01:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.441 14:01:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.441 14:01:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.441 14:01:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:18:37.441 14:01:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:18:37.441 14:01:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.441 14:01:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.441 14:01:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:37.441 14:01:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.441 14:01:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:37.441 14:01:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.441 14:01:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.441 14:01:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.441 14:01:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.441 14:01:17 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.441 14:01:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.441 14:01:17 -- paths/export.sh@5 -- # export PATH 00:18:37.441 14:01:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.441 14:01:17 -- nvmf/common.sh@47 -- # : 0 00:18:37.441 14:01:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.441 14:01:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.441 14:01:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.441 14:01:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.441 14:01:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.441 14:01:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.441 14:01:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.441 14:01:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.441 14:01:17 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:37.441 14:01:17 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:37.441 14:01:17 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:37.441 14:01:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:37.441 14:01:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.441 14:01:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:37.441 14:01:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:37.441 14:01:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:37.442 14:01:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.442 14:01:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.442 14:01:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.442 14:01:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:37.442 14:01:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:37.442 14:01:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:37.442 14:01:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback 
]] 00:18:37.442 14:01:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:37.442 14:01:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:37.442 14:01:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.442 14:01:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.442 14:01:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:37.442 14:01:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:37.442 14:01:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:37.442 14:01:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:37.442 14:01:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:37.442 14:01:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.442 14:01:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:37.442 14:01:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:37.442 14:01:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:37.442 14:01:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:37.442 14:01:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:37.442 14:01:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:37.442 Cannot find device "nvmf_tgt_br" 00:18:37.442 14:01:17 -- nvmf/common.sh@155 -- # true 00:18:37.442 14:01:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:37.700 Cannot find device "nvmf_tgt_br2" 00:18:37.700 14:01:17 -- nvmf/common.sh@156 -- # true 00:18:37.700 14:01:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:37.700 14:01:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:37.700 Cannot find device "nvmf_tgt_br" 00:18:37.700 14:01:17 -- nvmf/common.sh@158 -- # true 00:18:37.700 14:01:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:37.700 Cannot find device "nvmf_tgt_br2" 00:18:37.700 14:01:17 -- nvmf/common.sh@159 -- # true 00:18:37.700 14:01:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:37.700 14:01:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:37.700 14:01:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:37.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.700 14:01:17 -- nvmf/common.sh@162 -- # true 00:18:37.700 14:01:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:37.700 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:37.700 14:01:17 -- nvmf/common.sh@163 -- # true 00:18:37.700 14:01:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:37.700 14:01:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:37.700 14:01:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:37.700 14:01:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:37.700 14:01:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:37.700 14:01:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:37.700 14:01:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:37.700 14:01:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:37.700 14:01:17 -- nvmf/common.sh@180 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:37.959 14:01:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:37.959 14:01:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:37.959 14:01:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:37.959 14:01:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:37.959 14:01:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:37.959 14:01:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:37.959 14:01:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:37.959 14:01:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:37.959 14:01:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:37.959 14:01:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:37.959 14:01:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:37.959 14:01:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:37.959 14:01:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:37.959 14:01:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:37.959 14:01:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:37.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:18:37.959 00:18:37.959 --- 10.0.0.2 ping statistics --- 00:18:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.959 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:37.959 14:01:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:37.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:37.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:18:37.959 00:18:37.959 --- 10.0.0.3 ping statistics --- 00:18:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.959 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:37.959 14:01:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:37.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:37.959 00:18:37.959 --- 10.0.0.1 ping statistics --- 00:18:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.959 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:37.959 14:01:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.959 14:01:17 -- nvmf/common.sh@422 -- # return 0 00:18:37.959 14:01:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:37.959 14:01:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.959 14:01:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:37.959 14:01:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:37.959 14:01:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.959 14:01:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:37.959 14:01:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:37.959 14:01:17 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:37.959 14:01:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:37.959 14:01:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:37.959 14:01:17 -- common/autotest_common.sh@10 -- # set +x 00:18:37.959 14:01:17 -- nvmf/common.sh@470 -- # nvmfpid=73838 00:18:37.959 14:01:17 -- nvmf/common.sh@471 -- # waitforlisten 73838 00:18:37.959 14:01:17 -- common/autotest_common.sh@817 -- # '[' -z 73838 ']' 00:18:37.959 14:01:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.959 14:01:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:37.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.959 14:01:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.959 14:01:17 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:37.959 14:01:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:37.959 14:01:17 -- common/autotest_common.sh@10 -- # set +x 00:18:37.959 [2024-04-26 14:01:17.613527] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:37.959 [2024-04-26 14:01:17.613648] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.220 [2024-04-26 14:01:17.789480] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.478 [2024-04-26 14:01:18.069066] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.478 [2024-04-26 14:01:18.069123] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.478 [2024-04-26 14:01:18.069139] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.478 [2024-04-26 14:01:18.069193] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.478 [2024-04-26 14:01:18.069207] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
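For this suite the target is pinned to a single core (-m 0x1, versus 0x7 for the lvol suite above) and, as before, runs entirely inside the test namespace. Reduced to the two commands that matter, with the paths and options exactly as they appear in the trace, the start-up amounts to something like:

# launch the target inside the namespace; the trace notes that -e 0xFFFF selects all tracepoint groups
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

# once /var/tmp/spdk.sock is listening, create the TCP transport with the same -o -u 8192 options the script passes
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192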
00:18:38.478 [2024-04-26 14:01:18.069248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.045 14:01:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:39.045 14:01:18 -- common/autotest_common.sh@850 -- # return 0 00:18:39.045 14:01:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:39.045 14:01:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:39.045 14:01:18 -- common/autotest_common.sh@10 -- # set +x 00:18:39.045 14:01:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.045 14:01:18 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:39.303 [2024-04-26 14:01:18.785548] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:39.303 14:01:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:39.303 14:01:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.303 14:01:18 -- common/autotest_common.sh@10 -- # set +x 00:18:39.303 ************************************ 00:18:39.303 START TEST lvs_grow_clean 00:18:39.303 ************************************ 00:18:39.303 14:01:18 -- common/autotest_common.sh@1111 -- # lvs_grow 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:39.303 14:01:18 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:39.563 14:01:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:39.563 14:01:19 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:39.823 14:01:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:39.823 14:01:19 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:39.823 14:01:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:40.081 14:01:19 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:40.081 14:01:19 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:40.081 14:01:19 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 lvol 150 00:18:40.339 14:01:19 -- target/nvmf_lvs_grow.sh@33 -- # lvol=9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1 00:18:40.339 14:01:19 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:40.339 14:01:19 -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:40.599 [2024-04-26 14:01:20.058676] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:40.599 [2024-04-26 14:01:20.058777] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:40.599 true 00:18:40.599 14:01:20 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:40.599 14:01:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:40.858 14:01:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:40.858 14:01:20 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:40.858 14:01:20 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1 00:18:41.116 14:01:20 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:41.375 [2024-04-26 14:01:20.877932] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.376 14:01:20 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:41.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.635 14:01:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74006 00:18:41.635 14:01:21 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:41.635 14:01:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.635 14:01:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74006 /var/tmp/bdevperf.sock 00:18:41.635 14:01:21 -- common/autotest_common.sh@817 -- # '[' -z 74006 ']' 00:18:41.635 14:01:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.635 14:01:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:41.635 14:01:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.635 14:01:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:41.635 14:01:21 -- common/autotest_common.sh@10 -- # set +x 00:18:41.635 [2024-04-26 14:01:21.184263] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:18:41.635 [2024-04-26 14:01:21.184376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:18:41.894 [2024-04-26 14:01:21.349771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.153 [2024-04-26 14:01:21.601856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.411 14:01:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:42.411 14:01:22 -- common/autotest_common.sh@850 -- # return 0 00:18:42.411 14:01:22 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:42.670 Nvme0n1 00:18:42.670 14:01:22 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:42.929 [ 00:18:42.929 { 00:18:42.929 "aliases": [ 00:18:42.929 "9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1" 00:18:42.929 ], 00:18:42.929 "assigned_rate_limits": { 00:18:42.929 "r_mbytes_per_sec": 0, 00:18:42.929 "rw_ios_per_sec": 0, 00:18:42.929 "rw_mbytes_per_sec": 0, 00:18:42.929 "w_mbytes_per_sec": 0 00:18:42.929 }, 00:18:42.929 "block_size": 4096, 00:18:42.929 "claimed": false, 00:18:42.929 "driver_specific": { 00:18:42.929 "mp_policy": "active_passive", 00:18:42.929 "nvme": [ 00:18:42.929 { 00:18:42.929 "ctrlr_data": { 00:18:42.929 "ana_reporting": false, 00:18:42.929 "cntlid": 1, 00:18:42.929 "firmware_revision": "24.05", 00:18:42.929 "model_number": "SPDK bdev Controller", 00:18:42.929 "multi_ctrlr": true, 00:18:42.929 "oacs": { 00:18:42.929 "firmware": 0, 00:18:42.929 "format": 0, 00:18:42.929 "ns_manage": 0, 00:18:42.929 "security": 0 00:18:42.929 }, 00:18:42.929 "serial_number": "SPDK0", 00:18:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:42.929 "vendor_id": "0x8086" 00:18:42.929 }, 00:18:42.929 "ns_data": { 00:18:42.929 "can_share": true, 00:18:42.929 "id": 1 00:18:42.929 }, 00:18:42.929 "trid": { 00:18:42.929 "adrfam": "IPv4", 00:18:42.929 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:42.929 "traddr": "10.0.0.2", 00:18:42.929 "trsvcid": "4420", 00:18:42.929 "trtype": "TCP" 00:18:42.929 }, 00:18:42.929 "vs": { 00:18:42.929 "nvme_version": "1.3" 00:18:42.929 } 00:18:42.929 } 00:18:42.929 ] 00:18:42.929 }, 00:18:42.929 "memory_domains": [ 00:18:42.929 { 00:18:42.929 "dma_device_id": "system", 00:18:42.929 "dma_device_type": 1 00:18:42.929 } 00:18:42.929 ], 00:18:42.929 "name": "Nvme0n1", 00:18:42.929 "num_blocks": 38912, 00:18:42.929 "product_name": "NVMe disk", 00:18:42.929 "supported_io_types": { 00:18:42.929 "abort": true, 00:18:42.929 "compare": true, 00:18:42.929 "compare_and_write": true, 00:18:42.929 "flush": true, 00:18:42.929 "nvme_admin": true, 00:18:42.929 "nvme_io": true, 00:18:42.929 "read": true, 00:18:42.929 "reset": true, 00:18:42.929 "unmap": true, 00:18:42.929 "write": true, 00:18:42.929 "write_zeroes": true 00:18:42.929 }, 00:18:42.929 "uuid": "9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1", 00:18:42.929 "zoned": false 00:18:42.929 } 00:18:42.929 ] 00:18:42.929 14:01:22 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.929 14:01:22 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74053 00:18:42.929 14:01:22 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:18:42.929 Running I/O for 10 seconds... 00:18:44.306 Latency(us) 00:18:44.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.306 Nvme0n1 : 1.00 8884.00 34.70 0.00 0.00 0.00 0.00 0.00 00:18:44.306 =================================================================================================================== 00:18:44.306 Total : 8884.00 34.70 0.00 0.00 0.00 0.00 0.00 00:18:44.306 00:18:44.900 14:01:24 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:45.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.159 Nvme0n1 : 2.00 8923.00 34.86 0.00 0.00 0.00 0.00 0.00 00:18:45.159 =================================================================================================================== 00:18:45.159 Total : 8923.00 34.86 0.00 0.00 0.00 0.00 0.00 00:18:45.159 00:18:45.159 true 00:18:45.159 14:01:24 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:45.159 14:01:24 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:45.418 14:01:25 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:45.418 14:01:25 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:45.418 14:01:25 -- target/nvmf_lvs_grow.sh@65 -- # wait 74053 00:18:45.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.985 Nvme0n1 : 3.00 8991.67 35.12 0.00 0.00 0.00 0.00 0.00 00:18:45.985 =================================================================================================================== 00:18:45.985 Total : 8991.67 35.12 0.00 0.00 0.00 0.00 0.00 00:18:45.985 00:18:46.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.919 Nvme0n1 : 4.00 9011.50 35.20 0.00 0.00 0.00 0.00 0.00 00:18:46.919 =================================================================================================================== 00:18:46.919 Total : 9011.50 35.20 0.00 0.00 0.00 0.00 0.00 00:18:46.919 00:18:48.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:48.293 Nvme0n1 : 5.00 8990.60 35.12 0.00 0.00 0.00 0.00 0.00 00:18:48.293 =================================================================================================================== 00:18:48.293 Total : 8990.60 35.12 0.00 0.00 0.00 0.00 0.00 00:18:48.293 00:18:49.228 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:49.228 Nvme0n1 : 6.00 8964.33 35.02 0.00 0.00 0.00 0.00 0.00 00:18:49.228 =================================================================================================================== 00:18:49.228 Total : 8964.33 35.02 0.00 0.00 0.00 0.00 0.00 00:18:49.228 00:18:50.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:50.164 Nvme0n1 : 7.00 8960.00 35.00 0.00 0.00 0.00 0.00 0.00 00:18:50.164 =================================================================================================================== 00:18:50.164 Total : 8960.00 35.00 0.00 0.00 0.00 0.00 0.00 00:18:50.164 00:18:51.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:51.100 Nvme0n1 : 8.00 8949.62 34.96 0.00 0.00 0.00 0.00 0.00 00:18:51.100 
=================================================================================================================== 00:18:51.100 Total : 8949.62 34.96 0.00 0.00 0.00 0.00 0.00 00:18:51.100 00:18:52.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:52.077 Nvme0n1 : 9.00 8950.00 34.96 0.00 0.00 0.00 0.00 0.00 00:18:52.077 =================================================================================================================== 00:18:52.077 Total : 8950.00 34.96 0.00 0.00 0.00 0.00 0.00 00:18:52.077 00:18:53.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.013 Nvme0n1 : 10.00 8938.10 34.91 0.00 0.00 0.00 0.00 0.00 00:18:53.013 =================================================================================================================== 00:18:53.014 Total : 8938.10 34.91 0.00 0.00 0.00 0.00 0.00 00:18:53.014 00:18:53.014 00:18:53.014 Latency(us) 00:18:53.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.014 Nvme0n1 : 10.00 8946.49 34.95 0.00 0.00 14302.86 5711.37 29688.60 00:18:53.014 =================================================================================================================== 00:18:53.014 Total : 8946.49 34.95 0.00 0.00 14302.86 5711.37 29688.60 00:18:53.014 0 00:18:53.014 14:01:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74006 00:18:53.014 14:01:32 -- common/autotest_common.sh@936 -- # '[' -z 74006 ']' 00:18:53.014 14:01:32 -- common/autotest_common.sh@940 -- # kill -0 74006 00:18:53.014 14:01:32 -- common/autotest_common.sh@941 -- # uname 00:18:53.014 14:01:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.014 14:01:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74006 00:18:53.014 14:01:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:53.014 14:01:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:53.014 killing process with pid 74006 00:18:53.014 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.014 00:18:53.014 Latency(us) 00:18:53.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.014 =================================================================================================================== 00:18:53.014 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.014 14:01:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74006' 00:18:53.014 14:01:32 -- common/autotest_common.sh@955 -- # kill 74006 00:18:53.014 14:01:32 -- common/autotest_common.sh@960 -- # wait 74006 00:18:54.387 14:01:33 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:54.645 14:01:34 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:54.645 14:01:34 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:54.645 14:01:34 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:54.645 14:01:34 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:54.645 14:01:34 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:54.902 [2024-04-26 14:01:34.468290] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:54.902 
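The clean-grow case that just completed boils down to a short RPC sequence: back an lvstore with a 200M aio file, carve a 150M lvol out of it, export it over the TCP listener, then enlarge the file and let the lvstore absorb the new capacity while bdevperf keeps writing. Stripped of the harness, with the sizes and cluster settings taken from the trace (the shell variables here are only shorthand), the flow is approximately:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

truncate -s 200M "$aio"
$rpc bdev_aio_create "$aio" aio_bdev 4096
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs)    # 49 data clusters to start
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

# grow the backing file, have the aio bdev re-read its size,
# then grow the lvstore into the new space
truncate -s 400M "$aio"
$rpc bdev_aio_rescan aio_bdev
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs"    # total_data_clusters should now report 99

The free_clusters=61 read back just before teardown is consistent with that: 99 clusters in total, minus the 38 clusters (150M at 4M per cluster, rounded up) held by the thick-provisioned lvol.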
14:01:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:54.902 14:01:34 -- common/autotest_common.sh@638 -- # local es=0 00:18:54.902 14:01:34 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:54.902 14:01:34 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.902 14:01:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:54.902 14:01:34 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.902 14:01:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:54.902 14:01:34 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.902 14:01:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:54.902 14:01:34 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:54.902 14:01:34 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:54.902 14:01:34 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:55.176 2024/04/26 14:01:34 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:38c6680e-fe2d-4f9e-8ba1-83cfed4779d4], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:18:55.176 request: 00:18:55.176 { 00:18:55.176 "method": "bdev_lvol_get_lvstores", 00:18:55.176 "params": { 00:18:55.176 "uuid": "38c6680e-fe2d-4f9e-8ba1-83cfed4779d4" 00:18:55.176 } 00:18:55.176 } 00:18:55.176 Got JSON-RPC error response 00:18:55.176 GoRPCClient: error on JSON-RPC call 00:18:55.176 14:01:34 -- common/autotest_common.sh@641 -- # es=1 00:18:55.176 14:01:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:55.176 14:01:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:55.176 14:01:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:55.176 14:01:34 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:55.437 aio_bdev 00:18:55.437 14:01:34 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1 00:18:55.437 14:01:34 -- common/autotest_common.sh@885 -- # local bdev_name=9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1 00:18:55.437 14:01:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:55.437 14:01:34 -- common/autotest_common.sh@887 -- # local i 00:18:55.437 14:01:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:55.437 14:01:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:55.437 14:01:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:55.695 14:01:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1 -t 2000 00:18:55.953 [ 00:18:55.953 { 00:18:55.953 "aliases": [ 00:18:55.953 "lvs/lvol" 00:18:55.953 ], 00:18:55.953 "assigned_rate_limits": { 00:18:55.953 "r_mbytes_per_sec": 0, 00:18:55.953 "rw_ios_per_sec": 0, 00:18:55.953 "rw_mbytes_per_sec": 0, 00:18:55.953 "w_mbytes_per_sec": 0 00:18:55.953 }, 00:18:55.953 "block_size": 4096, 
00:18:55.953 "claimed": false, 00:18:55.953 "driver_specific": { 00:18:55.953 "lvol": { 00:18:55.953 "base_bdev": "aio_bdev", 00:18:55.953 "clone": false, 00:18:55.953 "esnap_clone": false, 00:18:55.953 "lvol_store_uuid": "38c6680e-fe2d-4f9e-8ba1-83cfed4779d4", 00:18:55.953 "snapshot": false, 00:18:55.953 "thin_provision": false 00:18:55.953 } 00:18:55.953 }, 00:18:55.953 "name": "9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1", 00:18:55.953 "num_blocks": 38912, 00:18:55.953 "product_name": "Logical Volume", 00:18:55.953 "supported_io_types": { 00:18:55.954 "abort": false, 00:18:55.954 "compare": false, 00:18:55.954 "compare_and_write": false, 00:18:55.954 "flush": false, 00:18:55.954 "nvme_admin": false, 00:18:55.954 "nvme_io": false, 00:18:55.954 "read": true, 00:18:55.954 "reset": true, 00:18:55.954 "unmap": true, 00:18:55.954 "write": true, 00:18:55.954 "write_zeroes": true 00:18:55.954 }, 00:18:55.954 "uuid": "9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1", 00:18:55.954 "zoned": false 00:18:55.954 } 00:18:55.954 ] 00:18:55.954 14:01:35 -- common/autotest_common.sh@893 -- # return 0 00:18:55.954 14:01:35 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:55.954 14:01:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:55.954 14:01:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:55.954 14:01:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:55.954 14:01:35 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:56.212 14:01:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:56.212 14:01:35 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9f21d9a0-d368-470c-ba4a-4ad24f0ee4c1 00:18:56.469 14:01:36 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 38c6680e-fe2d-4f9e-8ba1-83cfed4779d4 00:18:56.728 14:01:36 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:56.986 14:01:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:57.243 ************************************ 00:18:57.243 END TEST lvs_grow_clean 00:18:57.243 ************************************ 00:18:57.243 00:18:57.243 real 0m17.976s 00:18:57.243 user 0m16.487s 00:18:57.243 sys 0m2.640s 00:18:57.243 14:01:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:57.243 14:01:36 -- common/autotest_common.sh@10 -- # set +x 00:18:57.501 14:01:36 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:57.501 14:01:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:57.501 14:01:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.501 14:01:36 -- common/autotest_common.sh@10 -- # set +x 00:18:57.501 ************************************ 00:18:57.501 START TEST lvs_grow_dirty 00:18:57.501 ************************************ 00:18:57.501 14:01:37 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:57.501 14:01:37 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:57.761 14:01:37 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:57.761 14:01:37 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:58.020 14:01:37 -- target/nvmf_lvs_grow.sh@28 -- # lvs=49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:18:58.020 14:01:37 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:58.020 14:01:37 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:18:58.278 14:01:37 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:58.278 14:01:37 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:58.278 14:01:37 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 lvol 150 00:18:58.536 14:01:37 -- target/nvmf_lvs_grow.sh@33 -- # lvol=16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:18:58.536 14:01:37 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:58.536 14:01:37 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:58.536 [2024-04-26 14:01:38.166646] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:58.536 [2024-04-26 14:01:38.166743] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:58.536 true 00:18:58.536 14:01:38 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:18:58.536 14:01:38 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:58.801 14:01:38 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:58.801 14:01:38 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:59.072 14:01:38 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:18:59.331 14:01:38 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:59.589 14:01:39 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:59.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
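On the initiator side the pattern is the same for every run in this file: bdevperf is started idle against its own RPC socket, the controller is attached over the namespace's 10.0.0.2:4420 listener, and only then is the timed workload triggered. A rough sketch, with the arguments as they appear in the trace:

# start bdevperf idle; -z makes it wait for an RPC before running the configured randwrite job
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# attach the target subsystem as bdev Nvme0 (its namespace shows up as Nvme0n1)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

# kick off the timed run (the per-second tables in the log come from -S 1)
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests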
00:18:59.590 14:01:39 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=74449 00:18:59.590 14:01:39 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.590 14:01:39 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:59.590 14:01:39 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 74449 /var/tmp/bdevperf.sock 00:18:59.590 14:01:39 -- common/autotest_common.sh@817 -- # '[' -z 74449 ']' 00:18:59.590 14:01:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.590 14:01:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:59.590 14:01:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.590 14:01:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:59.590 14:01:39 -- common/autotest_common.sh@10 -- # set +x 00:18:59.848 [2024-04-26 14:01:39.315031] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:18:59.848 [2024-04-26 14:01:39.315167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74449 ] 00:18:59.848 [2024-04-26 14:01:39.489208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.107 [2024-04-26 14:01:39.730741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.673 14:01:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:00.673 14:01:40 -- common/autotest_common.sh@850 -- # return 0 00:19:00.674 14:01:40 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:19:00.932 Nvme0n1 00:19:00.932 14:01:40 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:19:01.190 [ 00:19:01.190 { 00:19:01.190 "aliases": [ 00:19:01.190 "16698b95-edf9-443d-b5aa-593b1ad1fdcb" 00:19:01.190 ], 00:19:01.190 "assigned_rate_limits": { 00:19:01.190 "r_mbytes_per_sec": 0, 00:19:01.190 "rw_ios_per_sec": 0, 00:19:01.190 "rw_mbytes_per_sec": 0, 00:19:01.190 "w_mbytes_per_sec": 0 00:19:01.190 }, 00:19:01.190 "block_size": 4096, 00:19:01.190 "claimed": false, 00:19:01.190 "driver_specific": { 00:19:01.190 "mp_policy": "active_passive", 00:19:01.190 "nvme": [ 00:19:01.190 { 00:19:01.190 "ctrlr_data": { 00:19:01.190 "ana_reporting": false, 00:19:01.190 "cntlid": 1, 00:19:01.190 "firmware_revision": "24.05", 00:19:01.190 "model_number": "SPDK bdev Controller", 00:19:01.190 "multi_ctrlr": true, 00:19:01.190 "oacs": { 00:19:01.190 "firmware": 0, 00:19:01.190 "format": 0, 00:19:01.190 "ns_manage": 0, 00:19:01.190 "security": 0 00:19:01.190 }, 00:19:01.190 "serial_number": "SPDK0", 00:19:01.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:01.190 "vendor_id": "0x8086" 00:19:01.190 }, 00:19:01.190 "ns_data": { 00:19:01.190 "can_share": true, 00:19:01.190 "id": 1 00:19:01.190 }, 00:19:01.190 "trid": { 00:19:01.190 "adrfam": "IPv4", 00:19:01.190 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:01.190 "traddr": "10.0.0.2", 00:19:01.190 "trsvcid": "4420", 00:19:01.190 "trtype": "TCP" 00:19:01.190 }, 
00:19:01.190 "vs": { 00:19:01.191 "nvme_version": "1.3" 00:19:01.191 } 00:19:01.191 } 00:19:01.191 ] 00:19:01.191 }, 00:19:01.191 "memory_domains": [ 00:19:01.191 { 00:19:01.191 "dma_device_id": "system", 00:19:01.191 "dma_device_type": 1 00:19:01.191 } 00:19:01.191 ], 00:19:01.191 "name": "Nvme0n1", 00:19:01.191 "num_blocks": 38912, 00:19:01.191 "product_name": "NVMe disk", 00:19:01.191 "supported_io_types": { 00:19:01.191 "abort": true, 00:19:01.191 "compare": true, 00:19:01.191 "compare_and_write": true, 00:19:01.191 "flush": true, 00:19:01.191 "nvme_admin": true, 00:19:01.191 "nvme_io": true, 00:19:01.191 "read": true, 00:19:01.191 "reset": true, 00:19:01.191 "unmap": true, 00:19:01.191 "write": true, 00:19:01.191 "write_zeroes": true 00:19:01.191 }, 00:19:01.191 "uuid": "16698b95-edf9-443d-b5aa-593b1ad1fdcb", 00:19:01.191 "zoned": false 00:19:01.191 } 00:19:01.191 ] 00:19:01.191 14:01:40 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.191 14:01:40 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=74491 00:19:01.191 14:01:40 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:19:01.191 Running I/O for 10 seconds... 00:19:02.132 Latency(us) 00:19:02.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:02.132 Nvme0n1 : 1.00 8989.00 35.11 0.00 0.00 0.00 0.00 0.00 00:19:02.132 =================================================================================================================== 00:19:02.132 Total : 8989.00 35.11 0.00 0.00 0.00 0.00 0.00 00:19:02.132 00:19:03.077 14:01:42 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:03.077 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:03.077 Nvme0n1 : 2.00 9085.00 35.49 0.00 0.00 0.00 0.00 0.00 00:19:03.077 =================================================================================================================== 00:19:03.077 Total : 9085.00 35.49 0.00 0.00 0.00 0.00 0.00 00:19:03.077 00:19:03.336 true 00:19:03.336 14:01:42 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:03.336 14:01:42 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:19:03.595 14:01:43 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:19:03.595 14:01:43 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:19:03.595 14:01:43 -- target/nvmf_lvs_grow.sh@65 -- # wait 74491 00:19:04.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:04.161 Nvme0n1 : 3.00 9120.33 35.63 0.00 0.00 0.00 0.00 0.00 00:19:04.161 =================================================================================================================== 00:19:04.161 Total : 9120.33 35.63 0.00 0.00 0.00 0.00 0.00 00:19:04.161 00:19:05.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:05.099 Nvme0n1 : 4.00 9140.50 35.71 0.00 0.00 0.00 0.00 0.00 00:19:05.099 =================================================================================================================== 00:19:05.099 Total : 9140.50 35.71 0.00 0.00 0.00 0.00 0.00 00:19:05.099 00:19:06.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:06.058 Nvme0n1 : 5.00 
9112.40 35.60 0.00 0.00 0.00 0.00 0.00 00:19:06.058 =================================================================================================================== 00:19:06.058 Total : 9112.40 35.60 0.00 0.00 0.00 0.00 0.00 00:19:06.058 00:19:07.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:07.437 Nvme0n1 : 6.00 8933.83 34.90 0.00 0.00 0.00 0.00 0.00 00:19:07.437 =================================================================================================================== 00:19:07.437 Total : 8933.83 34.90 0.00 0.00 0.00 0.00 0.00 00:19:07.437 00:19:08.373 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:08.373 Nvme0n1 : 7.00 8897.57 34.76 0.00 0.00 0.00 0.00 0.00 00:19:08.373 =================================================================================================================== 00:19:08.373 Total : 8897.57 34.76 0.00 0.00 0.00 0.00 0.00 00:19:08.373 00:19:09.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.309 Nvme0n1 : 8.00 8901.25 34.77 0.00 0.00 0.00 0.00 0.00 00:19:09.309 =================================================================================================================== 00:19:09.309 Total : 8901.25 34.77 0.00 0.00 0.00 0.00 0.00 00:19:09.309 00:19:10.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:10.296 Nvme0n1 : 9.00 8894.89 34.75 0.00 0.00 0.00 0.00 0.00 00:19:10.296 =================================================================================================================== 00:19:10.296 Total : 8894.89 34.75 0.00 0.00 0.00 0.00 0.00 00:19:10.296 00:19:11.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:11.239 Nvme0n1 : 10.00 8872.30 34.66 0.00 0.00 0.00 0.00 0.00 00:19:11.239 =================================================================================================================== 00:19:11.239 Total : 8872.30 34.66 0.00 0.00 0.00 0.00 0.00 00:19:11.239 00:19:11.239 00:19:11.239 Latency(us) 00:19:11.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:11.239 Nvme0n1 : 10.01 8876.09 34.67 0.00 0.00 14416.07 5211.30 95171.96 00:19:11.239 =================================================================================================================== 00:19:11.239 Total : 8876.09 34.67 0.00 0.00 14416.07 5211.30 95171.96 00:19:11.239 0 00:19:11.239 14:01:50 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 74449 00:19:11.239 14:01:50 -- common/autotest_common.sh@936 -- # '[' -z 74449 ']' 00:19:11.239 14:01:50 -- common/autotest_common.sh@940 -- # kill -0 74449 00:19:11.239 14:01:50 -- common/autotest_common.sh@941 -- # uname 00:19:11.239 14:01:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:11.239 14:01:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74449 00:19:11.239 killing process with pid 74449 00:19:11.239 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.239 00:19:11.239 Latency(us) 00:19:11.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.239 =================================================================================================================== 00:19:11.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.239 14:01:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:11.239 14:01:50 -- 
common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:11.239 14:01:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74449' 00:19:11.239 14:01:50 -- common/autotest_common.sh@955 -- # kill 74449 00:19:11.239 14:01:50 -- common/autotest_common.sh@960 -- # wait 74449 00:19:12.617 14:01:52 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:12.876 14:01:52 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:12.876 14:01:52 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:13.135 14:01:52 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:13.135 14:01:52 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:13.135 14:01:52 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 73838 00:19:13.135 14:01:52 -- target/nvmf_lvs_grow.sh@74 -- # wait 73838 00:19:13.135 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 73838 Killed "${NVMF_APP[@]}" "$@" 00:19:13.135 14:01:52 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:13.135 14:01:52 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:13.135 14:01:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:13.135 14:01:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:13.135 14:01:52 -- common/autotest_common.sh@10 -- # set +x 00:19:13.135 14:01:52 -- nvmf/common.sh@470 -- # nvmfpid=74661 00:19:13.135 14:01:52 -- nvmf/common.sh@471 -- # waitforlisten 74661 00:19:13.135 14:01:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:13.135 14:01:52 -- common/autotest_common.sh@817 -- # '[' -z 74661 ']' 00:19:13.135 14:01:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.135 14:01:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:13.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.135 14:01:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.135 14:01:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:13.135 14:01:52 -- common/autotest_common.sh@10 -- # set +x 00:19:13.135 [2024-04-26 14:01:52.724303] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:13.135 [2024-04-26 14:01:52.724434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.398 [2024-04-26 14:01:52.904239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.656 [2024-04-26 14:01:53.150226] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.656 [2024-04-26 14:01:53.150287] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.656 [2024-04-26 14:01:53.150303] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:13.656 [2024-04-26 14:01:53.150326] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:13.656 [2024-04-26 14:01:53.150340] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:13.656 [2024-04-26 14:01:53.150383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.938 14:01:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:13.938 14:01:53 -- common/autotest_common.sh@850 -- # return 0 00:19:13.938 14:01:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:13.938 14:01:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:13.938 14:01:53 -- common/autotest_common.sh@10 -- # set +x 00:19:14.202 14:01:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:14.202 14:01:53 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:14.202 [2024-04-26 14:01:53.839374] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:14.202 [2024-04-26 14:01:53.839698] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:14.202 [2024-04-26 14:01:53.839938] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:14.462 14:01:53 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:14.462 14:01:53 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:19:14.462 14:01:53 -- common/autotest_common.sh@885 -- # local bdev_name=16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:19:14.462 14:01:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:14.462 14:01:53 -- common/autotest_common.sh@887 -- # local i 00:19:14.462 14:01:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:14.462 14:01:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:14.462 14:01:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:14.462 14:01:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16698b95-edf9-443d-b5aa-593b1ad1fdcb -t 2000 00:19:14.721 [ 00:19:14.721 { 00:19:14.721 "aliases": [ 00:19:14.721 "lvs/lvol" 00:19:14.721 ], 00:19:14.721 "assigned_rate_limits": { 00:19:14.721 "r_mbytes_per_sec": 0, 00:19:14.721 "rw_ios_per_sec": 0, 00:19:14.721 "rw_mbytes_per_sec": 0, 00:19:14.721 "w_mbytes_per_sec": 0 00:19:14.721 }, 00:19:14.721 "block_size": 4096, 00:19:14.721 "claimed": false, 00:19:14.721 "driver_specific": { 00:19:14.721 "lvol": { 00:19:14.721 "base_bdev": "aio_bdev", 00:19:14.721 "clone": false, 00:19:14.721 "esnap_clone": false, 00:19:14.721 "lvol_store_uuid": "49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5", 00:19:14.721 "snapshot": false, 00:19:14.721 "thin_provision": false 00:19:14.721 } 00:19:14.721 }, 00:19:14.721 "name": "16698b95-edf9-443d-b5aa-593b1ad1fdcb", 00:19:14.721 "num_blocks": 38912, 00:19:14.721 "product_name": "Logical Volume", 00:19:14.721 "supported_io_types": { 00:19:14.721 "abort": false, 00:19:14.721 "compare": false, 00:19:14.721 "compare_and_write": false, 00:19:14.721 "flush": false, 00:19:14.721 "nvme_admin": false, 00:19:14.721 "nvme_io": false, 00:19:14.721 "read": true, 00:19:14.721 "reset": true, 00:19:14.721 "unmap": true, 00:19:14.721 "write": true, 00:19:14.721 "write_zeroes": true 00:19:14.721 }, 00:19:14.721 "uuid": "16698b95-edf9-443d-b5aa-593b1ad1fdcb", 00:19:14.721 "zoned": false 00:19:14.721 } 00:19:14.721 ] 00:19:14.721 14:01:54 -- common/autotest_common.sh@893 -- # return 0 
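This is the "dirty" half of the test: the first target (pid 73838 above) is killed with SIGKILL while the lvstore metadata is still dirty, a fresh nvmf_tgt is started on core 0, and re-registering the same AIO file triggers the blobstore recovery seen in the bs_recover notices. Roughly, reusing $RPC and $AIO from the sketch above and the lvol UUID from this run:

    kill -9 "$old_nvmf_pid"                                   # 73838 here; leaves the store dirty

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    $RPC bdev_aio_create "$AIO" aio_bdev 4096                 # recovery runs while the bdev is examined
    $RPC bdev_wait_for_examine
    $RPC bdev_get_bdevs -b 16698b95-edf9-443d-b5aa-593b1ad1fdcb -t 2000

The bdev_lvol_get_lvstores checks that follow (free_clusters == 61, total_data_clusters == 99) confirm that the grow performed before the kill survived the unclean shutdown.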
00:19:14.721 14:01:54 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:14.721 14:01:54 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:14.980 14:01:54 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:14.980 14:01:54 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:14.980 14:01:54 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:15.239 14:01:54 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:15.239 14:01:54 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:15.498 [2024-04-26 14:01:54.958845] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:15.498 14:01:55 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:15.498 14:01:55 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.498 14:01:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:15.498 14:01:55 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.498 14:01:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.498 14:01:55 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.498 14:01:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.498 14:01:55 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.498 14:01:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.498 14:01:55 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.498 14:01:55 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:15.498 14:01:55 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:15.757 2024/04/26 14:01:55 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:19:15.757 request: 00:19:15.757 { 00:19:15.757 "method": "bdev_lvol_get_lvstores", 00:19:15.757 "params": { 00:19:15.757 "uuid": "49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5" 00:19:15.757 } 00:19:15.757 } 00:19:15.757 Got JSON-RPC error response 00:19:15.757 GoRPCClient: error on JSON-RPC call 00:19:15.757 14:01:55 -- common/autotest_common.sh@641 -- # es=1 00:19:15.757 14:01:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:15.757 14:01:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:15.757 14:01:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:15.757 14:01:55 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:15.757 aio_bdev 00:19:16.016 14:01:55 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:19:16.016 14:01:55 -- common/autotest_common.sh@885 -- # local 
bdev_name=16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:19:16.016 14:01:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:16.016 14:01:55 -- common/autotest_common.sh@887 -- # local i 00:19:16.016 14:01:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:16.016 14:01:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:16.016 14:01:55 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:16.016 14:01:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 16698b95-edf9-443d-b5aa-593b1ad1fdcb -t 2000 00:19:16.275 [ 00:19:16.275 { 00:19:16.275 "aliases": [ 00:19:16.275 "lvs/lvol" 00:19:16.275 ], 00:19:16.275 "assigned_rate_limits": { 00:19:16.275 "r_mbytes_per_sec": 0, 00:19:16.275 "rw_ios_per_sec": 0, 00:19:16.275 "rw_mbytes_per_sec": 0, 00:19:16.275 "w_mbytes_per_sec": 0 00:19:16.275 }, 00:19:16.275 "block_size": 4096, 00:19:16.275 "claimed": false, 00:19:16.275 "driver_specific": { 00:19:16.275 "lvol": { 00:19:16.275 "base_bdev": "aio_bdev", 00:19:16.275 "clone": false, 00:19:16.275 "esnap_clone": false, 00:19:16.275 "lvol_store_uuid": "49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5", 00:19:16.275 "snapshot": false, 00:19:16.275 "thin_provision": false 00:19:16.275 } 00:19:16.275 }, 00:19:16.275 "name": "16698b95-edf9-443d-b5aa-593b1ad1fdcb", 00:19:16.275 "num_blocks": 38912, 00:19:16.275 "product_name": "Logical Volume", 00:19:16.275 "supported_io_types": { 00:19:16.275 "abort": false, 00:19:16.275 "compare": false, 00:19:16.275 "compare_and_write": false, 00:19:16.275 "flush": false, 00:19:16.275 "nvme_admin": false, 00:19:16.275 "nvme_io": false, 00:19:16.275 "read": true, 00:19:16.275 "reset": true, 00:19:16.275 "unmap": true, 00:19:16.275 "write": true, 00:19:16.275 "write_zeroes": true 00:19:16.275 }, 00:19:16.275 "uuid": "16698b95-edf9-443d-b5aa-593b1ad1fdcb", 00:19:16.275 "zoned": false 00:19:16.275 } 00:19:16.275 ] 00:19:16.275 14:01:55 -- common/autotest_common.sh@893 -- # return 0 00:19:16.275 14:01:55 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:16.275 14:01:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:16.649 14:01:56 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:16.649 14:01:56 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:16.649 14:01:56 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:16.649 14:01:56 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:16.649 14:01:56 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 16698b95-edf9-443d-b5aa-593b1ad1fdcb 00:19:16.908 14:01:56 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49dbe7c5-0bf4-4252-a3ce-7f1e47f403a5 00:19:17.166 14:01:56 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:17.424 14:01:56 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:19:17.682 ************************************ 00:19:17.682 END TEST lvs_grow_dirty 00:19:17.682 ************************************ 00:19:17.682 00:19:17.682 real 0m20.273s 00:19:17.682 user 0m42.039s 00:19:17.682 sys 0m7.797s 00:19:17.682 14:01:57 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:19:17.682 14:01:57 -- common/autotest_common.sh@10 -- # set +x 00:19:17.939 14:01:57 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:17.939 14:01:57 -- common/autotest_common.sh@794 -- # type=--id 00:19:17.939 14:01:57 -- common/autotest_common.sh@795 -- # id=0 00:19:17.939 14:01:57 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:17.939 14:01:57 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:17.939 14:01:57 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:17.939 14:01:57 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:17.939 14:01:57 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:17.939 14:01:57 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:17.939 nvmf_trace.0 00:19:17.939 14:01:57 -- common/autotest_common.sh@809 -- # return 0 00:19:17.939 14:01:57 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:17.939 14:01:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:17.939 14:01:57 -- nvmf/common.sh@117 -- # sync 00:19:18.198 14:01:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:18.198 14:01:57 -- nvmf/common.sh@120 -- # set +e 00:19:18.198 14:01:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:18.198 14:01:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:18.198 rmmod nvme_tcp 00:19:18.198 rmmod nvme_fabrics 00:19:18.198 rmmod nvme_keyring 00:19:18.198 14:01:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:18.198 14:01:57 -- nvmf/common.sh@124 -- # set -e 00:19:18.198 14:01:57 -- nvmf/common.sh@125 -- # return 0 00:19:18.198 14:01:57 -- nvmf/common.sh@478 -- # '[' -n 74661 ']' 00:19:18.198 14:01:57 -- nvmf/common.sh@479 -- # killprocess 74661 00:19:18.198 14:01:57 -- common/autotest_common.sh@936 -- # '[' -z 74661 ']' 00:19:18.198 14:01:57 -- common/autotest_common.sh@940 -- # kill -0 74661 00:19:18.198 14:01:57 -- common/autotest_common.sh@941 -- # uname 00:19:18.198 14:01:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.198 14:01:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74661 00:19:18.198 14:01:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:18.198 14:01:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:18.198 14:01:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74661' 00:19:18.198 killing process with pid 74661 00:19:18.198 14:01:57 -- common/autotest_common.sh@955 -- # kill 74661 00:19:18.198 14:01:57 -- common/autotest_common.sh@960 -- # wait 74661 00:19:19.573 14:01:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:19.573 14:01:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:19.573 14:01:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:19.573 14:01:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.573 14:01:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.573 14:01:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.573 14:01:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.573 14:01:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.573 14:01:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:19.573 00:19:19.573 real 0m42.285s 00:19:19.573 user 1m5.407s 00:19:19.573 sys 0m11.518s 00:19:19.573 14:01:59 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:19:19.573 14:01:59 -- common/autotest_common.sh@10 -- # set +x 00:19:19.573 ************************************ 00:19:19.573 END TEST nvmf_lvs_grow 00:19:19.573 ************************************ 00:19:19.573 14:01:59 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:19.573 14:01:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:19.573 14:01:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:19.573 14:01:59 -- common/autotest_common.sh@10 -- # set +x 00:19:19.832 ************************************ 00:19:19.832 START TEST nvmf_bdev_io_wait 00:19:19.832 ************************************ 00:19:19.832 14:01:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:19:19.832 * Looking for test storage... 00:19:19.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:19.832 14:01:59 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:19.832 14:01:59 -- nvmf/common.sh@7 -- # uname -s 00:19:19.832 14:01:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:19.832 14:01:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:19.832 14:01:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:19.832 14:01:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:19.832 14:01:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:19.832 14:01:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:19.832 14:01:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:19.832 14:01:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:19.832 14:01:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:19.832 14:01:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:19.832 14:01:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:19:19.832 14:01:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:19:19.832 14:01:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:19.832 14:01:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:19.832 14:01:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:19.832 14:01:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:19.832 14:01:59 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:19.832 14:01:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:19.832 14:01:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:19.832 14:01:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:19.832 14:01:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.832 14:01:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.832 14:01:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.832 14:01:59 -- paths/export.sh@5 -- # export PATH 00:19:19.832 14:01:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:19.832 14:01:59 -- nvmf/common.sh@47 -- # : 0 00:19:19.832 14:01:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:19.832 14:01:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:19.832 14:01:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:19.832 14:01:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:19.832 14:01:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:19.832 14:01:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:19.832 14:01:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:19.832 14:01:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:19.832 14:01:59 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:19.832 14:01:59 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:19.832 14:01:59 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:19.832 14:01:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:19.832 14:01:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:19.832 14:01:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:19.832 14:01:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:19.832 14:01:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:19.832 14:01:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.832 14:01:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:19.832 14:01:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:19.832 14:01:59 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:19.832 14:01:59 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:19.832 14:01:59 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:19.832 14:01:59 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:19.832 14:01:59 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
00:19:19.832 14:01:59 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:19.833 14:01:59 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.833 14:01:59 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.833 14:01:59 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:19.833 14:01:59 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:19.833 14:01:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:19.833 14:01:59 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:19.833 14:01:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:19.833 14:01:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.833 14:01:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:19.833 14:01:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:19.833 14:01:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:19.833 14:01:59 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:19.833 14:01:59 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:20.091 14:01:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:20.091 Cannot find device "nvmf_tgt_br" 00:19:20.091 14:01:59 -- nvmf/common.sh@155 -- # true 00:19:20.091 14:01:59 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:20.091 Cannot find device "nvmf_tgt_br2" 00:19:20.091 14:01:59 -- nvmf/common.sh@156 -- # true 00:19:20.091 14:01:59 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:20.091 14:01:59 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:20.091 Cannot find device "nvmf_tgt_br" 00:19:20.091 14:01:59 -- nvmf/common.sh@158 -- # true 00:19:20.091 14:01:59 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:20.091 Cannot find device "nvmf_tgt_br2" 00:19:20.091 14:01:59 -- nvmf/common.sh@159 -- # true 00:19:20.091 14:01:59 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:20.091 14:01:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:20.091 14:01:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:20.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.091 14:01:59 -- nvmf/common.sh@162 -- # true 00:19:20.091 14:01:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:20.091 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:20.091 14:01:59 -- nvmf/common.sh@163 -- # true 00:19:20.091 14:01:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:20.091 14:01:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:20.091 14:01:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:20.091 14:01:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:20.091 14:01:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:20.349 14:01:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:20.349 14:01:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:20.349 14:01:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:20.349 14:01:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:20.349 
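nvmf_veth_init, traced above, builds the test network from scratch: one namespace for the target plus three veth pairs, with the initiator end left in the root namespace. A minimal sketch of that plumbing, using the interface and namespace names from nvmf/common.sh:

    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target listen address 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target listen address 2

    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # target endpoints move into the netns
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2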
14:01:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:20.349 14:01:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:20.349 14:01:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:20.349 14:01:59 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:20.349 14:01:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:20.349 14:01:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:20.349 14:01:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:20.349 14:01:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:20.349 14:01:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:20.349 14:01:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:20.349 14:01:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:20.349 14:01:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:20.349 14:01:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:20.349 14:01:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:20.349 14:01:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:20.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:20.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:19:20.349 00:19:20.349 --- 10.0.0.2 ping statistics --- 00:19:20.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.349 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:19:20.349 14:01:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:20.349 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:20.349 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:19:20.349 00:19:20.349 --- 10.0.0.3 ping statistics --- 00:19:20.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.349 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:19:20.349 14:01:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:20.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:20.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:19:20.349 00:19:20.349 --- 10.0.0.1 ping statistics --- 00:19:20.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:20.349 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:20.349 14:01:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:20.349 14:01:59 -- nvmf/common.sh@422 -- # return 0 00:19:20.349 14:01:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:20.349 14:01:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:20.349 14:01:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:20.349 14:01:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:20.349 14:01:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:20.349 14:01:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:20.349 14:01:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:20.349 14:01:59 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:20.349 14:01:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:20.349 14:01:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:20.349 14:01:59 -- common/autotest_common.sh@10 -- # set +x 00:19:20.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
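The rest of the bring-up, also visible above: the links come up, the three *_br peers are enslaved to a bridge, iptables admits NVMe/TCP traffic on port 4420, and one ping per address verifies connectivity before the target starts. Condensed:

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # root ns -> target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target ns -> initiator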
00:19:20.349 14:02:00 -- nvmf/common.sh@470 -- # nvmfpid=75088 00:19:20.349 14:02:00 -- nvmf/common.sh@471 -- # waitforlisten 75088 00:19:20.349 14:02:00 -- common/autotest_common.sh@817 -- # '[' -z 75088 ']' 00:19:20.349 14:02:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.349 14:02:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:20.349 14:02:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.349 14:02:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:20.349 14:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:20.349 14:02:00 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:20.607 [2024-04-26 14:02:00.099117] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:20.607 [2024-04-26 14:02:00.099254] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.607 [2024-04-26 14:02:00.275472] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.866 [2024-04-26 14:02:00.524608] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.866 [2024-04-26 14:02:00.524668] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:20.866 [2024-04-26 14:02:00.524684] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.866 [2024-04-26 14:02:00.524695] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.866 [2024-04-26 14:02:00.524707] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
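For bdev_io_wait the target is started with --wait-for-rpc so that the bdev_io pool can be shrunk before the framework initializes; the tiny pool is presumably what forces submissions onto the bdev-io-wait retry path this test exercises. The RPC sequence the following trace performs via rpc_cmd, sketched with rpc.py directly ($RPC as in the earlier sketches):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

    $RPC bdev_set_options -p 5 -c 1                               # very small bdev_io pool/cache
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420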
00:19:20.866 [2024-04-26 14:02:00.524890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.866 [2024-04-26 14:02:00.525261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.866 [2024-04-26 14:02:00.525860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.866 [2024-04-26 14:02:00.525891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.434 14:02:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:21.434 14:02:00 -- common/autotest_common.sh@850 -- # return 0 00:19:21.434 14:02:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:21.434 14:02:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:21.434 14:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:21.434 14:02:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.434 14:02:00 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:21.434 14:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.434 14:02:00 -- common/autotest_common.sh@10 -- # set +x 00:19:21.434 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.434 14:02:01 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:21.434 14:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.434 14:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:21.694 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.694 14:02:01 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:21.694 14:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.694 14:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:21.694 [2024-04-26 14:02:01.297104] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:21.694 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.694 14:02:01 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:21.694 14:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.694 14:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:21.954 Malloc0 00:19:21.954 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:21.954 14:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.954 14:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:21.954 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:21.954 14:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.954 14:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:21.954 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:21.954 14:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:21.954 14:02:01 -- common/autotest_common.sh@10 -- # set +x 00:19:21.954 [2024-04-26 14:02:01.432334] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:21.954 14:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=75146 00:19:21.954 14:02:01 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # config=() 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # local subsystem config 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@30 -- # READ_PID=75148 00:19:21.954 14:02:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:21.954 { 00:19:21.954 "params": { 00:19:21.954 "name": "Nvme$subsystem", 00:19:21.954 "trtype": "$TEST_TRANSPORT", 00:19:21.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.954 "adrfam": "ipv4", 00:19:21.954 "trsvcid": "$NVMF_PORT", 00:19:21.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.954 "hdgst": ${hdgst:-false}, 00:19:21.954 "ddgst": ${ddgst:-false} 00:19:21.954 }, 00:19:21.954 "method": "bdev_nvme_attach_controller" 00:19:21.954 } 00:19:21.954 EOF 00:19:21.954 )") 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # config=() 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=75150 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # local subsystem config 00:19:21.954 14:02:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # cat 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:21.954 { 00:19:21.954 "params": { 00:19:21.954 "name": "Nvme$subsystem", 00:19:21.954 "trtype": "$TEST_TRANSPORT", 00:19:21.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.954 "adrfam": "ipv4", 00:19:21.954 "trsvcid": "$NVMF_PORT", 00:19:21.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.954 "hdgst": ${hdgst:-false}, 00:19:21.954 "ddgst": ${ddgst:-false} 00:19:21.954 }, 00:19:21.954 "method": "bdev_nvme_attach_controller" 00:19:21.954 } 00:19:21.954 EOF 00:19:21.954 )") 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # config=() 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # local subsystem config 00:19:21.954 14:02:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:21.954 { 00:19:21.954 "params": { 00:19:21.954 "name": "Nvme$subsystem", 00:19:21.954 "trtype": "$TEST_TRANSPORT", 00:19:21.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.954 "adrfam": "ipv4", 00:19:21.954 "trsvcid": "$NVMF_PORT", 00:19:21.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.954 "hdgst": ${hdgst:-false}, 00:19:21.954 "ddgst": ${ddgst:-false} 00:19:21.954 }, 00:19:21.954 "method": "bdev_nvme_attach_controller" 00:19:21.954 } 00:19:21.954 EOF 00:19:21.954 )") 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=75153 00:19:21.954 14:02:01 -- 
target/bdev_io_wait.sh@35 -- # sync 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # cat 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # cat 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # config=() 00:19:21.954 14:02:01 -- nvmf/common.sh@521 -- # local subsystem config 00:19:21.954 14:02:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:19:21.954 14:02:01 -- nvmf/common.sh@545 -- # jq . 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:19:21.954 { 00:19:21.954 "params": { 00:19:21.954 "name": "Nvme$subsystem", 00:19:21.954 "trtype": "$TEST_TRANSPORT", 00:19:21.954 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:21.954 "adrfam": "ipv4", 00:19:21.954 "trsvcid": "$NVMF_PORT", 00:19:21.954 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:21.954 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:21.954 "hdgst": ${hdgst:-false}, 00:19:21.954 "ddgst": ${ddgst:-false} 00:19:21.954 }, 00:19:21.954 "method": "bdev_nvme_attach_controller" 00:19:21.954 } 00:19:21.954 EOF 00:19:21.954 )") 00:19:21.954 14:02:01 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:21.954 14:02:01 -- nvmf/common.sh@545 -- # jq . 00:19:21.954 14:02:01 -- nvmf/common.sh@543 -- # cat 00:19:21.954 14:02:01 -- nvmf/common.sh@546 -- # IFS=, 00:19:21.954 14:02:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:21.954 "params": { 00:19:21.954 "name": "Nvme1", 00:19:21.954 "trtype": "tcp", 00:19:21.954 "traddr": "10.0.0.2", 00:19:21.954 "adrfam": "ipv4", 00:19:21.954 "trsvcid": "4420", 00:19:21.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.954 "hdgst": false, 00:19:21.954 "ddgst": false 00:19:21.954 }, 00:19:21.954 "method": "bdev_nvme_attach_controller" 00:19:21.954 }' 00:19:21.954 14:02:01 -- nvmf/common.sh@545 -- # jq . 00:19:21.954 14:02:01 -- nvmf/common.sh@545 -- # jq . 
00:19:21.954 14:02:01 -- nvmf/common.sh@546 -- # IFS=, 00:19:21.954 14:02:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:21.954 "params": { 00:19:21.954 "name": "Nvme1", 00:19:21.955 "trtype": "tcp", 00:19:21.955 "traddr": "10.0.0.2", 00:19:21.955 "adrfam": "ipv4", 00:19:21.955 "trsvcid": "4420", 00:19:21.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.955 "hdgst": false, 00:19:21.955 "ddgst": false 00:19:21.955 }, 00:19:21.955 "method": "bdev_nvme_attach_controller" 00:19:21.955 }' 00:19:21.955 14:02:01 -- nvmf/common.sh@546 -- # IFS=, 00:19:21.955 14:02:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:21.955 "params": { 00:19:21.955 "name": "Nvme1", 00:19:21.955 "trtype": "tcp", 00:19:21.955 "traddr": "10.0.0.2", 00:19:21.955 "adrfam": "ipv4", 00:19:21.955 "trsvcid": "4420", 00:19:21.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.955 "hdgst": false, 00:19:21.955 "ddgst": false 00:19:21.955 }, 00:19:21.955 "method": "bdev_nvme_attach_controller" 00:19:21.955 }' 00:19:21.955 14:02:01 -- nvmf/common.sh@546 -- # IFS=, 00:19:21.955 14:02:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:19:21.955 "params": { 00:19:21.955 "name": "Nvme1", 00:19:21.955 "trtype": "tcp", 00:19:21.955 "traddr": "10.0.0.2", 00:19:21.955 "adrfam": "ipv4", 00:19:21.955 "trsvcid": "4420", 00:19:21.955 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.955 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.955 "hdgst": false, 00:19:21.955 "ddgst": false 00:19:21.955 }, 00:19:21.955 "method": "bdev_nvme_attach_controller" 00:19:21.955 }' 00:19:21.955 14:02:01 -- target/bdev_io_wait.sh@37 -- # wait 75146 00:19:21.955 [2024-04-26 14:02:01.536515] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:21.955 [2024-04-26 14:02:01.536825] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:21.955 [2024-04-26 14:02:01.541858] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:21.955 [2024-04-26 14:02:01.542309] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:21.955 [2024-04-26 14:02:01.542301] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:21.955 [2024-04-26 14:02:01.542391] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:21.955 [2024-04-26 14:02:01.553802] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:19:21.955 [2024-04-26 14:02:01.553917] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:22.213 [2024-04-26 14:02:01.766832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.213 [2024-04-26 14:02:01.833741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.472 [2024-04-26 14:02:01.898636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.472 [2024-04-26 14:02:01.971069] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.472 [2024-04-26 14:02:01.994136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:22.472 [2024-04-26 14:02:02.071719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:22.472 [2024-04-26 14:02:02.133834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:22.731 [2024-04-26 14:02:02.203006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:23.000 Running I/O for 1 seconds... 00:19:23.000 Running I/O for 1 seconds... 00:19:23.000 Running I/O for 1 seconds... 00:19:23.265 Running I/O for 1 seconds... 00:19:23.832 00:19:23.833 Latency(us) 00:19:23.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.833 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:23.833 Nvme1n1 : 1.00 203853.39 796.30 0.00 0.00 625.61 292.81 2145.05 00:19:23.833 =================================================================================================================== 00:19:23.833 Total : 203853.39 796.30 0.00 0.00 625.61 292.81 2145.05 00:19:24.091 00:19:24.091 Latency(us) 00:19:24.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.091 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:24.091 Nvme1n1 : 1.01 9141.55 35.71 0.00 0.00 13935.29 7895.90 18634.33 00:19:24.091 =================================================================================================================== 00:19:24.091 Total : 9141.55 35.71 0.00 0.00 13935.29 7895.90 18634.33 00:19:24.091 00:19:24.091 Latency(us) 00:19:24.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.091 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:24.091 Nvme1n1 : 1.01 8578.63 33.51 0.00 0.00 14859.49 6843.12 24319.38 00:19:24.091 =================================================================================================================== 00:19:24.091 Total : 8578.63 33.51 0.00 0.00 14859.49 6843.12 24319.38 00:19:24.091 00:19:24.091 Latency(us) 00:19:24.091 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.091 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:24.091 Nvme1n1 : 1.01 7645.14 29.86 0.00 0.00 16674.08 3263.64 25372.17 00:19:24.091 =================================================================================================================== 00:19:24.091 Total : 7645.14 29.86 0.00 0.00 16674.08 3263.64 25372.17 00:19:25.470 14:02:05 -- target/bdev_io_wait.sh@38 -- # wait 75148 00:19:25.470 14:02:05 -- target/bdev_io_wait.sh@39 -- # wait 75150 00:19:25.470 14:02:05 -- target/bdev_io_wait.sh@40 -- # wait 75153 00:19:25.470 14:02:05 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
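The workload itself, whose results appear in the latency tables above, is four bdevperf instances run in parallel against the same Nvme1 bdev, differing only in core mask, instance id and workload (write, read, flush, unmap); each gets its bdev from a generated JSON config. A rough sketch of one instance; the other three swap -m/-i/-w, and the script waits on all four pids:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # likewise: -m 0x20 -i 2 -w read, -m 0x40 -i 3 -w flush, -m 0x80 -i 4 -w unmap
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"

gen_nvmf_target_json (from nvmf/common.sh) emits the bdev_nvme_attach_controller parameters printed above: Nvme1 over tcp to 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1. Per the tables, flush finishes around 200k IOPS while read, write and unmap land in the 7.6-9.1k IOPS range.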
00:19:25.470 14:02:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:25.470 14:02:05 -- common/autotest_common.sh@10 -- # set +x 00:19:25.470 14:02:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:25.470 14:02:05 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:25.470 14:02:05 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:25.470 14:02:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:25.470 14:02:05 -- nvmf/common.sh@117 -- # sync 00:19:25.470 14:02:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:25.470 14:02:05 -- nvmf/common.sh@120 -- # set +e 00:19:25.470 14:02:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.470 14:02:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:25.470 rmmod nvme_tcp 00:19:25.470 rmmod nvme_fabrics 00:19:25.470 rmmod nvme_keyring 00:19:25.730 14:02:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:25.730 14:02:05 -- nvmf/common.sh@124 -- # set -e 00:19:25.730 14:02:05 -- nvmf/common.sh@125 -- # return 0 00:19:25.730 14:02:05 -- nvmf/common.sh@478 -- # '[' -n 75088 ']' 00:19:25.730 14:02:05 -- nvmf/common.sh@479 -- # killprocess 75088 00:19:25.730 14:02:05 -- common/autotest_common.sh@936 -- # '[' -z 75088 ']' 00:19:25.730 14:02:05 -- common/autotest_common.sh@940 -- # kill -0 75088 00:19:25.730 14:02:05 -- common/autotest_common.sh@941 -- # uname 00:19:25.730 14:02:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:25.730 14:02:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75088 00:19:25.730 killing process with pid 75088 00:19:25.730 14:02:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:25.730 14:02:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:25.730 14:02:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75088' 00:19:25.730 14:02:05 -- common/autotest_common.sh@955 -- # kill 75088 00:19:25.730 14:02:05 -- common/autotest_common.sh@960 -- # wait 75088 00:19:27.105 14:02:06 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:27.105 14:02:06 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:27.105 14:02:06 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:27.105 14:02:06 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.105 14:02:06 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:27.105 14:02:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.105 14:02:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.105 14:02:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.105 14:02:06 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:27.105 00:19:27.105 real 0m7.223s 00:19:27.105 user 0m32.714s 00:19:27.105 sys 0m2.797s 00:19:27.105 14:02:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:27.105 14:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:27.105 ************************************ 00:19:27.105 END TEST nvmf_bdev_io_wait 00:19:27.105 ************************************ 00:19:27.105 14:02:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:27.105 14:02:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:27.105 14:02:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:27.105 14:02:06 -- common/autotest_common.sh@10 -- # set +x 00:19:27.105 ************************************ 00:19:27.105 START TEST nvmf_queue_depth 00:19:27.105 
************************************ 00:19:27.105 14:02:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:19:27.364 * Looking for test storage... 00:19:27.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:27.364 14:02:06 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:27.364 14:02:06 -- nvmf/common.sh@7 -- # uname -s 00:19:27.364 14:02:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.364 14:02:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.364 14:02:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.364 14:02:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.364 14:02:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.364 14:02:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.364 14:02:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.364 14:02:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.364 14:02:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.364 14:02:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.364 14:02:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:19:27.364 14:02:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:19:27.364 14:02:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.364 14:02:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.364 14:02:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:27.364 14:02:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.364 14:02:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:27.364 14:02:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.364 14:02:06 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.364 14:02:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.364 14:02:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.364 14:02:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.364 14:02:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.364 14:02:06 -- paths/export.sh@5 -- # export PATH 00:19:27.364 14:02:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.364 14:02:06 -- nvmf/common.sh@47 -- # : 0 00:19:27.364 14:02:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.364 14:02:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.364 14:02:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.364 14:02:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.364 14:02:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.364 14:02:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.364 14:02:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.364 14:02:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.364 14:02:06 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:27.364 14:02:06 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:27.364 14:02:06 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.364 14:02:06 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:27.364 14:02:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:27.364 14:02:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.364 14:02:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:27.364 14:02:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:27.364 14:02:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:27.364 14:02:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.364 14:02:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.364 14:02:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.364 14:02:06 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:27.364 14:02:06 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:27.364 14:02:06 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:27.364 14:02:06 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:27.364 14:02:06 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:27.364 14:02:06 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:27.364 14:02:06 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.364 14:02:06 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.364 14:02:06 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:27.364 14:02:06 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:27.364 14:02:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.364 14:02:06 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.364 14:02:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.364 14:02:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.364 14:02:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.364 14:02:06 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.364 14:02:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.364 14:02:06 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.364 14:02:06 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:27.364 14:02:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:27.364 Cannot find device "nvmf_tgt_br" 00:19:27.364 14:02:06 -- nvmf/common.sh@155 -- # true 00:19:27.364 14:02:06 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.364 Cannot find device "nvmf_tgt_br2" 00:19:27.364 14:02:06 -- nvmf/common.sh@156 -- # true 00:19:27.364 14:02:06 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:27.364 14:02:06 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:27.364 Cannot find device "nvmf_tgt_br" 00:19:27.364 14:02:06 -- nvmf/common.sh@158 -- # true 00:19:27.364 14:02:06 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:27.364 Cannot find device "nvmf_tgt_br2" 00:19:27.364 14:02:06 -- nvmf/common.sh@159 -- # true 00:19:27.364 14:02:06 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:27.364 14:02:07 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:27.364 14:02:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:27.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.623 14:02:07 -- nvmf/common.sh@162 -- # true 00:19:27.623 14:02:07 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.623 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.623 14:02:07 -- nvmf/common.sh@163 -- # true 00:19:27.623 14:02:07 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:27.623 14:02:07 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:27.623 14:02:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:27.623 14:02:07 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:27.623 14:02:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:27.623 14:02:07 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:27.623 14:02:07 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:27.623 14:02:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:27.623 14:02:07 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:27.623 14:02:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:27.623 14:02:07 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:27.623 14:02:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:27.623 14:02:07 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:27.623 14:02:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:27.624 14:02:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:19:27.624 14:02:07 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:27.624 14:02:07 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:27.624 14:02:07 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:27.624 14:02:07 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:27.624 14:02:07 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:27.624 14:02:07 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:27.624 14:02:07 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:27.624 14:02:07 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:27.624 14:02:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:27.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:27.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:19:27.624 00:19:27.624 --- 10.0.0.2 ping statistics --- 00:19:27.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.624 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:19:27.624 14:02:07 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:27.624 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:27.624 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:27.624 00:19:27.624 --- 10.0.0.3 ping statistics --- 00:19:27.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.624 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:27.624 14:02:07 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:27.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:27.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:27.624 00:19:27.624 --- 10.0.0.1 ping statistics --- 00:19:27.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:27.624 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:27.624 14:02:07 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:27.624 14:02:07 -- nvmf/common.sh@422 -- # return 0 00:19:27.624 14:02:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:27.624 14:02:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:27.624 14:02:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:27.624 14:02:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:27.624 14:02:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:27.624 14:02:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:27.624 14:02:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:27.883 14:02:07 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:27.883 14:02:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:27.883 14:02:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:27.883 14:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:27.883 14:02:07 -- nvmf/common.sh@470 -- # nvmfpid=75422 00:19:27.883 14:02:07 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.883 14:02:07 -- nvmf/common.sh@471 -- # waitforlisten 75422 00:19:27.883 14:02:07 -- common/autotest_common.sh@817 -- # '[' -z 75422 ']' 00:19:27.883 14:02:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.883 14:02:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:27.883 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:27.883 14:02:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.883 14:02:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:27.883 14:02:07 -- common/autotest_common.sh@10 -- # set +x 00:19:27.883 [2024-04-26 14:02:07.391767] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:27.883 [2024-04-26 14:02:07.391904] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.143 [2024-04-26 14:02:07.566049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.143 [2024-04-26 14:02:07.811795] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.143 [2024-04-26 14:02:07.811852] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.143 [2024-04-26 14:02:07.811869] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.143 [2024-04-26 14:02:07.811892] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.143 [2024-04-26 14:02:07.811905] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.143 [2024-04-26 14:02:07.811939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.711 14:02:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:28.711 14:02:08 -- common/autotest_common.sh@850 -- # return 0 00:19:28.711 14:02:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:28.711 14:02:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:28.711 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.711 14:02:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.711 14:02:08 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:28.711 14:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.711 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.711 [2024-04-26 14:02:08.312058] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.711 14:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.711 14:02:08 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:28.711 14:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.711 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 Malloc0 00:19:28.970 14:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.970 14:02:08 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:28.970 14:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.970 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 14:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.970 14:02:08 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:28.970 14:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.970 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 14:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.970 14:02:08 -- 
target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:28.970 14:02:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:28.970 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 [2024-04-26 14:02:08.451998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.970 14:02:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:28.970 14:02:08 -- target/queue_depth.sh@30 -- # bdevperf_pid=75472 00:19:28.970 14:02:08 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:28.970 14:02:08 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.970 14:02:08 -- target/queue_depth.sh@33 -- # waitforlisten 75472 /var/tmp/bdevperf.sock 00:19:28.970 14:02:08 -- common/autotest_common.sh@817 -- # '[' -z 75472 ']' 00:19:28.970 14:02:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.970 14:02:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:28.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.970 14:02:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.970 14:02:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:28.970 14:02:08 -- common/autotest_common.sh@10 -- # set +x 00:19:28.970 [2024-04-26 14:02:08.548577] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:28.970 [2024-04-26 14:02:08.548694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75472 ] 00:19:29.229 [2024-04-26 14:02:08.720287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.488 [2024-04-26 14:02:08.968934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.746 14:02:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:29.746 14:02:09 -- common/autotest_common.sh@850 -- # return 0 00:19:29.746 14:02:09 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.746 14:02:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:29.746 14:02:09 -- common/autotest_common.sh@10 -- # set +x 00:19:30.004 NVMe0n1 00:19:30.004 14:02:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.004 14:02:09 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:30.004 Running I/O for 10 seconds... 
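Every step of the queue-depth setup traced above maps onto plain rpc.py invocations; stripped of the harness wrappers (rpc_cmd, waitforlisten, the target's network namespace), the sequence is roughly:
# Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem with one namespace and listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf was started with -z (idle until told to run) on its own RPC socket.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests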
00:19:39.974 00:19:39.974 Latency(us) 00:19:39.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.974 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:39.974 Verification LBA range: start 0x0 length 0x4000 00:19:39.974 NVMe0n1 : 10.07 9301.81 36.34 0.00 0.00 109593.68 21582.14 90960.81 00:19:39.974 =================================================================================================================== 00:19:39.974 Total : 9301.81 36.34 0.00 0.00 109593.68 21582.14 90960.81 00:19:39.974 0 00:19:40.233 14:02:19 -- target/queue_depth.sh@39 -- # killprocess 75472 00:19:40.233 14:02:19 -- common/autotest_common.sh@936 -- # '[' -z 75472 ']' 00:19:40.233 14:02:19 -- common/autotest_common.sh@940 -- # kill -0 75472 00:19:40.233 14:02:19 -- common/autotest_common.sh@941 -- # uname 00:19:40.233 14:02:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.233 14:02:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75472 00:19:40.233 killing process with pid 75472 00:19:40.233 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.233 00:19:40.233 Latency(us) 00:19:40.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.233 =================================================================================================================== 00:19:40.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.233 14:02:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:40.233 14:02:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:40.233 14:02:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75472' 00:19:40.233 14:02:19 -- common/autotest_common.sh@955 -- # kill 75472 00:19:40.233 14:02:19 -- common/autotest_common.sh@960 -- # wait 75472 00:19:41.611 14:02:20 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:41.611 14:02:20 -- target/queue_depth.sh@43 -- # nvmftestfini 00:19:41.611 14:02:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:41.611 14:02:20 -- nvmf/common.sh@117 -- # sync 00:19:41.611 14:02:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.611 14:02:21 -- nvmf/common.sh@120 -- # set +e 00:19:41.611 14:02:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:41.611 14:02:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:41.611 rmmod nvme_tcp 00:19:41.611 rmmod nvme_fabrics 00:19:41.611 rmmod nvme_keyring 00:19:41.611 14:02:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:41.611 14:02:21 -- nvmf/common.sh@124 -- # set -e 00:19:41.611 14:02:21 -- nvmf/common.sh@125 -- # return 0 00:19:41.611 14:02:21 -- nvmf/common.sh@478 -- # '[' -n 75422 ']' 00:19:41.611 14:02:21 -- nvmf/common.sh@479 -- # killprocess 75422 00:19:41.611 14:02:21 -- common/autotest_common.sh@936 -- # '[' -z 75422 ']' 00:19:41.611 14:02:21 -- common/autotest_common.sh@940 -- # kill -0 75422 00:19:41.611 14:02:21 -- common/autotest_common.sh@941 -- # uname 00:19:41.611 14:02:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:41.611 14:02:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75422 00:19:41.611 14:02:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:41.611 14:02:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:41.611 killing process with pid 75422 00:19:41.611 14:02:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75422' 00:19:41.611 14:02:21 -- 
common/autotest_common.sh@955 -- # kill 75422 00:19:41.611 14:02:21 -- common/autotest_common.sh@960 -- # wait 75422 00:19:42.988 14:02:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:42.988 14:02:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:42.988 14:02:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:42.988 14:02:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.988 14:02:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.988 14:02:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.988 14:02:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.988 14:02:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.247 14:02:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:43.247 00:19:43.247 real 0m16.023s 00:19:43.247 user 0m26.279s 00:19:43.247 sys 0m2.424s 00:19:43.247 14:02:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:43.247 ************************************ 00:19:43.247 END TEST nvmf_queue_depth 00:19:43.247 ************************************ 00:19:43.247 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.247 14:02:22 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:43.247 14:02:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:43.247 14:02:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:43.247 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.247 ************************************ 00:19:43.247 START TEST nvmf_multipath 00:19:43.247 ************************************ 00:19:43.247 14:02:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:19:43.507 * Looking for test storage... 
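The asterisk banners and the real/user/sys timings that close nvmf_queue_depth above and open nvmf_multipath below come from the harness's run_test helper; its shape is roughly the following, a simplified sketch of what test/common/autotest_common.sh does rather than a verbatim copy:
run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                # produces the real/user/sys lines seen in the log
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
}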
00:19:43.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:43.507 14:02:23 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:43.507 14:02:23 -- nvmf/common.sh@7 -- # uname -s 00:19:43.507 14:02:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.507 14:02:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.507 14:02:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.507 14:02:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.507 14:02:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.507 14:02:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.507 14:02:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.507 14:02:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.507 14:02:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.507 14:02:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.507 14:02:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:19:43.507 14:02:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:19:43.507 14:02:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.507 14:02:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.507 14:02:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:43.507 14:02:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.507 14:02:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:43.507 14:02:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.507 14:02:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.507 14:02:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.507 14:02:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.507 14:02:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.507 14:02:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.507 14:02:23 -- paths/export.sh@5 -- # export PATH 00:19:43.507 14:02:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.507 14:02:23 -- nvmf/common.sh@47 -- # : 0 00:19:43.507 14:02:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.507 14:02:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.507 14:02:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.507 14:02:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.507 14:02:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.507 14:02:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.507 14:02:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.507 14:02:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.507 14:02:23 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.507 14:02:23 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.507 14:02:23 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:43.507 14:02:23 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:43.507 14:02:23 -- target/multipath.sh@43 -- # nvmftestinit 00:19:43.507 14:02:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:43.507 14:02:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.507 14:02:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:43.507 14:02:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:43.507 14:02:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:43.507 14:02:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.507 14:02:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.507 14:02:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.508 14:02:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:43.508 14:02:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:43.508 14:02:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:43.508 14:02:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:43.508 14:02:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:43.508 14:02:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:43.508 14:02:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.508 14:02:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.508 14:02:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:43.508 14:02:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:43.508 14:02:23 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:43.508 14:02:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:43.508 14:02:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:43.508 14:02:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.508 14:02:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:43.508 14:02:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:43.508 14:02:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:43.508 14:02:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:43.508 14:02:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:43.508 14:02:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:43.508 Cannot find device "nvmf_tgt_br" 00:19:43.508 14:02:23 -- nvmf/common.sh@155 -- # true 00:19:43.508 14:02:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:43.508 Cannot find device "nvmf_tgt_br2" 00:19:43.508 14:02:23 -- nvmf/common.sh@156 -- # true 00:19:43.508 14:02:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:43.508 14:02:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:43.508 Cannot find device "nvmf_tgt_br" 00:19:43.508 14:02:23 -- nvmf/common.sh@158 -- # true 00:19:43.508 14:02:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:43.508 Cannot find device "nvmf_tgt_br2" 00:19:43.508 14:02:23 -- nvmf/common.sh@159 -- # true 00:19:43.508 14:02:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:43.767 14:02:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:43.767 14:02:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.767 14:02:23 -- nvmf/common.sh@162 -- # true 00:19:43.767 14:02:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.767 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.767 14:02:23 -- nvmf/common.sh@163 -- # true 00:19:43.767 14:02:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.767 14:02:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.767 14:02:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.767 14:02:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.767 14:02:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.767 14:02:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.767 14:02:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.767 14:02:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:43.767 14:02:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:43.767 14:02:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:43.767 14:02:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:43.767 14:02:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:43.767 14:02:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:43.767 14:02:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:19:43.767 14:02:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.767 14:02:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.767 14:02:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:43.767 14:02:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:43.767 14:02:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.767 14:02:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.767 14:02:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.767 14:02:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.767 14:02:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.767 14:02:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:43.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:19:43.767 00:19:43.767 --- 10.0.0.2 ping statistics --- 00:19:43.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.767 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:19:43.767 14:02:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:44.026 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.026 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:44.026 00:19:44.026 --- 10.0.0.3 ping statistics --- 00:19:44.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.026 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:44.026 14:02:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:44.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:19:44.026 00:19:44.026 --- 10.0.0.1 ping statistics --- 00:19:44.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.026 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:44.026 14:02:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.026 14:02:23 -- nvmf/common.sh@422 -- # return 0 00:19:44.026 14:02:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:44.026 14:02:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.026 14:02:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:44.026 14:02:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:44.026 14:02:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.026 14:02:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:44.026 14:02:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:44.026 14:02:23 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:19:44.026 14:02:23 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:19:44.026 14:02:23 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:19:44.026 14:02:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:44.026 14:02:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:44.026 14:02:23 -- common/autotest_common.sh@10 -- # set +x 00:19:44.026 14:02:23 -- nvmf/common.sh@470 -- # nvmfpid=75836 00:19:44.026 14:02:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:44.026 14:02:23 -- nvmf/common.sh@471 -- # waitforlisten 75836 00:19:44.026 14:02:23 -- common/autotest_common.sh@817 -- # '[' -z 75836 ']' 00:19:44.026 14:02:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.026 14:02:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:44.026 14:02:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.026 14:02:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:44.026 14:02:23 -- common/autotest_common.sh@10 -- # set +x 00:19:44.026 [2024-04-26 14:02:23.588037] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:19:44.026 [2024-04-26 14:02:23.588145] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.285 [2024-04-26 14:02:23.762071] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.542 [2024-04-26 14:02:24.005058] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.542 [2024-04-26 14:02:24.005334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.542 [2024-04-26 14:02:24.005633] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.542 [2024-04-26 14:02:24.005815] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.542 [2024-04-26 14:02:24.005879] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
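The ping checks just above are the tail of nvmf_veth_init, the same bring-up nvmf_queue_depth used: the initiator keeps 10.0.0.1 in the root namespace while the target owns 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge. Condensed from the commands in the log, with names exactly as the scripts use them:
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator leg, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target leg, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target leg, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br   # the *_br veth peers all join the bridge
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# plus `ip link set ... up` on every device and an iptables ACCEPT rule for TCP port 4420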
00:19:44.542 [2024-04-26 14:02:24.006234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.542 [2024-04-26 14:02:24.006338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.542 [2024-04-26 14:02:24.006447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.542 [2024-04-26 14:02:24.006884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.801 14:02:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:44.801 14:02:24 -- common/autotest_common.sh@850 -- # return 0 00:19:44.801 14:02:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:44.801 14:02:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.801 14:02:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.801 14:02:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.801 14:02:24 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:45.060 [2024-04-26 14:02:24.665068] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.060 14:02:24 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:45.320 Malloc0 00:19:45.579 14:02:24 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:19:45.579 14:02:25 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:45.838 14:02:25 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.097 [2024-04-26 14:02:25.553046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.097 14:02:25 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:46.097 [2024-04-26 14:02:25.752863] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:46.357 14:02:25 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:19:46.357 14:02:25 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:19:46.616 14:02:26 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:19:46.616 14:02:26 -- common/autotest_common.sh@1184 -- # local i=0 00:19:46.616 14:02:26 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:46.616 14:02:26 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:46.616 14:02:26 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:49.145 14:02:28 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:49.145 14:02:28 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:49.145 14:02:28 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:49.145 14:02:28 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:49.145 14:02:28 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:49.145 14:02:28 -- common/autotest_common.sh@1194 -- # return 0 00:19:49.145 14:02:28 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:19:49.146 14:02:28 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:19:49.146 14:02:28 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:19:49.146 14:02:28 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:19:49.146 14:02:28 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:19:49.146 14:02:28 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:19:49.146 14:02:28 -- target/multipath.sh@38 -- # return 0 00:19:49.146 14:02:28 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:19:49.146 14:02:28 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:19:49.146 14:02:28 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:19:49.146 14:02:28 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:19:49.146 14:02:28 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:19:49.146 14:02:28 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:19:49.146 14:02:28 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:19:49.146 14:02:28 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:19:49.146 14:02:28 -- target/multipath.sh@22 -- # local timeout=20 00:19:49.146 14:02:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:49.146 14:02:28 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:49.146 14:02:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:49.146 14:02:28 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:19:49.146 14:02:28 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:19:49.146 14:02:28 -- target/multipath.sh@22 -- # local timeout=20 00:19:49.146 14:02:28 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:49.146 14:02:28 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:49.146 14:02:28 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:49.146 14:02:28 -- target/multipath.sh@85 -- # echo numa 00:19:49.146 14:02:28 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:19:49.146 14:02:28 -- target/multipath.sh@88 -- # fio_pid=75973 00:19:49.146 14:02:28 -- target/multipath.sh@90 -- # sleep 1 00:19:49.146 [global] 00:19:49.146 thread=1 00:19:49.146 invalidate=1 00:19:49.146 rw=randrw 00:19:49.146 time_based=1 00:19:49.146 runtime=6 00:19:49.146 ioengine=libaio 00:19:49.146 direct=1 00:19:49.146 bs=4096 00:19:49.146 iodepth=128 00:19:49.146 norandommap=0 00:19:49.146 numjobs=1 00:19:49.146 00:19:49.146 verify_dump=1 00:19:49.146 verify_backlog=512 00:19:49.146 verify_state_save=0 00:19:49.146 do_verify=1 00:19:49.146 verify=crc32c-intel 00:19:49.146 [job0] 00:19:49.146 filename=/dev/nvme0n1 00:19:49.146 Could not set queue depth (nvme0n1) 00:19:49.146 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:49.146 fio-3.35 00:19:49.146 Starting 1 thread 00:19:49.715 14:02:29 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:50.000 14:02:29 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:50.264 14:02:29 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:19:50.264 14:02:29 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:19:50.264 14:02:29 -- target/multipath.sh@22 -- # local timeout=20 00:19:50.264 14:02:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:50.264 14:02:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:50.264 14:02:29 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:50.264 14:02:29 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:19:50.264 14:02:29 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:19:50.264 14:02:29 -- target/multipath.sh@22 -- # local timeout=20 00:19:50.264 14:02:29 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:50.264 14:02:29 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:50.264 14:02:29 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:50.264 14:02:29 -- target/multipath.sh@25 -- # sleep 1s 00:19:51.200 14:02:30 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:51.200 14:02:30 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:51.200 14:02:30 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:51.200 14:02:30 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:51.458 14:02:30 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:51.458 14:02:31 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:19:51.458 14:02:31 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:19:51.458 14:02:31 -- target/multipath.sh@22 -- # local timeout=20 00:19:51.458 14:02:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:51.458 14:02:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:51.458 14:02:31 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:51.458 14:02:31 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:19:51.458 14:02:31 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:19:51.458 14:02:31 -- target/multipath.sh@22 -- # local timeout=20 00:19:51.458 14:02:31 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:51.458 14:02:31 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:51.458 14:02:31 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:51.458 14:02:31 -- target/multipath.sh@25 -- # sleep 1s 00:19:52.832 14:02:32 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:52.832 14:02:32 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:19:52.832 14:02:32 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:52.832 14:02:32 -- target/multipath.sh@104 -- # wait 75973 00:19:55.455 00:19:55.455 job0: (groupid=0, jobs=1): err= 0: pid=75999: Fri Apr 26 14:02:34 2024 00:19:55.455 read: IOPS=11.1k, BW=43.3MiB/s (45.4MB/s)(260MiB/6005msec) 00:19:55.455 slat (usec): min=2, max=5901, avg=47.31, stdev=192.37 00:19:55.455 clat (usec): min=510, max=20482, avg=7849.98, stdev=1464.36 00:19:55.455 lat (usec): min=536, max=20493, avg=7897.28, stdev=1469.29 00:19:55.455 clat percentiles (usec): 00:19:55.455 | 1.00th=[ 4228], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 6849], 00:19:55.455 | 30.00th=[ 7242], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8094], 00:19:55.455 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9372], 95.00th=[10421], 00:19:55.455 | 99.00th=[11994], 99.50th=[12518], 99.90th=[17957], 99.95th=[19792], 00:19:55.455 | 99.99th=[20055] 00:19:55.455 bw ( KiB/s): min=10480, max=28000, per=52.41%, avg=23250.18, stdev=5143.32, samples=11 00:19:55.455 iops : min= 2620, max= 7000, avg=5812.55, stdev=1285.83, samples=11 00:19:55.455 write: IOPS=6519, BW=25.5MiB/s (26.7MB/s)(139MiB/5451msec); 0 zone resets 00:19:55.455 slat (usec): min=13, max=2280, avg=64.40, stdev=121.29 00:19:55.455 clat (usec): min=325, max=20043, avg=6714.24, stdev=1300.71 00:19:55.455 lat (usec): min=391, max=20073, avg=6778.63, stdev=1304.58 00:19:55.455 clat percentiles (usec): 00:19:55.455 | 1.00th=[ 2769], 5.00th=[ 4686], 10.00th=[ 5276], 20.00th=[ 5932], 00:19:55.455 | 30.00th=[ 6325], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 6980], 00:19:55.455 | 70.00th=[ 7177], 80.00th=[ 7439], 90.00th=[ 7898], 95.00th=[ 8455], 00:19:55.455 | 99.00th=[10683], 99.50th=[11469], 99.90th=[12780], 99.95th=[17171], 00:19:55.455 | 99.99th=[19268] 00:19:55.455 bw ( KiB/s): min=10824, max=27560, per=89.14%, avg=23245.18, stdev=4903.18, samples=11 00:19:55.455 iops : min= 2706, max= 6890, avg=5811.27, stdev=1225.78, samples=11 00:19:55.455 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.04% 00:19:55.455 lat (msec) : 2=0.23%, 4=1.03%, 10=93.79%, 20=4.86%, 50=0.02% 00:19:55.455 cpu : usr=7.64%, sys=32.13%, ctx=7582, majf=0, minf=84 00:19:55.455 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:19:55.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:55.455 issued rwts: total=66592,35538,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.455 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:55.455 00:19:55.455 Run status group 0 (all jobs): 00:19:55.455 READ: bw=43.3MiB/s (45.4MB/s), 43.3MiB/s-43.3MiB/s (45.4MB/s-45.4MB/s), io=260MiB (273MB), run=6005-6005msec 00:19:55.455 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=139MiB (146MB), run=5451-5451msec 00:19:55.455 00:19:55.455 Disk stats (read/write): 00:19:55.455 nvme0n1: ios=65847/34590, merge=0/0, ticks=461943/205579, in_queue=667522, util=98.58% 00:19:55.455 14:02:34 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:55.455 14:02:34 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:55.455 14:02:34 -- 
target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:19:55.455 14:02:34 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:19:55.455 14:02:34 -- target/multipath.sh@22 -- # local timeout=20 00:19:55.455 14:02:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:55.455 14:02:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:55.455 14:02:34 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:55.455 14:02:34 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:19:55.455 14:02:34 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:19:55.455 14:02:34 -- target/multipath.sh@22 -- # local timeout=20 00:19:55.455 14:02:34 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:55.455 14:02:34 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:55.455 14:02:34 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:19:55.455 14:02:34 -- target/multipath.sh@25 -- # sleep 1s 00:19:56.422 14:02:35 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:56.422 14:02:35 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:56.423 14:02:35 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:19:56.423 14:02:35 -- target/multipath.sh@113 -- # echo round-robin 00:19:56.423 14:02:35 -- target/multipath.sh@116 -- # fio_pid=76130 00:19:56.423 14:02:35 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:19:56.423 14:02:36 -- target/multipath.sh@118 -- # sleep 1 00:19:56.423 [global] 00:19:56.423 thread=1 00:19:56.423 invalidate=1 00:19:56.423 rw=randrw 00:19:56.423 time_based=1 00:19:56.423 runtime=6 00:19:56.423 ioengine=libaio 00:19:56.423 direct=1 00:19:56.423 bs=4096 00:19:56.423 iodepth=128 00:19:56.423 norandommap=0 00:19:56.423 numjobs=1 00:19:56.423 00:19:56.423 verify_dump=1 00:19:56.423 verify_backlog=512 00:19:56.423 verify_state_save=0 00:19:56.423 do_verify=1 00:19:56.423 verify=crc32c-intel 00:19:56.423 [job0] 00:19:56.423 filename=/dev/nvme0n1 00:19:56.423 Could not set queue depth (nvme0n1) 00:19:56.680 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:56.680 fio-3.35 00:19:56.680 Starting 1 thread 00:19:57.614 14:02:37 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:57.614 14:02:37 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:57.873 14:02:37 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:19:57.873 14:02:37 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:19:57.873 14:02:37 -- target/multipath.sh@22 -- # local timeout=20 00:19:57.873 14:02:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:57.873 14:02:37 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:19:57.873 14:02:37 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:57.873 14:02:37 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:19:57.873 14:02:37 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:19:57.873 14:02:37 -- target/multipath.sh@22 -- # local timeout=20 00:19:57.873 14:02:37 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:57.873 14:02:37 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:57.873 14:02:37 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:57.873 14:02:37 -- target/multipath.sh@25 -- # sleep 1s 00:19:59.247 14:02:38 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:19:59.247 14:02:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:59.247 14:02:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:59.247 14:02:38 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:59.247 14:02:38 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:59.506 14:02:38 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:19:59.506 14:02:38 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:19:59.506 14:02:38 -- target/multipath.sh@22 -- # local timeout=20 00:19:59.506 14:02:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:19:59.506 14:02:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:19:59.506 14:02:38 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:19:59.506 14:02:38 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:19:59.506 14:02:38 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:19:59.506 14:02:38 -- target/multipath.sh@22 -- # local timeout=20 00:19:59.506 14:02:38 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:19:59.506 14:02:38 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:19:59.506 14:02:38 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:19:59.506 14:02:38 -- target/multipath.sh@25 -- # sleep 1s 00:20:00.466 14:02:39 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:20:00.466 14:02:39 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:20:00.466 14:02:39 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:20:00.466 14:02:39 -- target/multipath.sh@132 -- # wait 76130 00:20:02.999 00:20:02.999 job0: (groupid=0, jobs=1): err= 0: pid=76151: Fri Apr 26 14:02:42 2024 00:20:02.999 read: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(302MiB/6003msec) 00:20:02.999 slat (usec): min=4, max=4213, avg=38.57, stdev=161.41 00:20:02.999 clat (usec): min=595, max=50285, avg=6953.79, stdev=1531.44 00:20:02.999 lat (usec): min=623, max=50298, avg=6992.36, stdev=1540.42 00:20:02.999 clat percentiles (usec): 00:20:02.999 | 1.00th=[ 3752], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 5800], 00:20:02.999 | 30.00th=[ 6325], 40.00th=[ 6718], 50.00th=[ 7046], 60.00th=[ 7373], 00:20:02.999 | 70.00th=[ 7635], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9110], 00:20:02.999 | 99.00th=[10814], 99.50th=[11207], 99.90th=[12518], 99.95th=[13304], 00:20:02.999 | 99.99th=[50070] 00:20:02.999 bw ( KiB/s): min= 7456, max=41848, per=51.91%, avg=26748.36, stdev=10754.62, samples=11 00:20:02.999 iops : min= 1864, max=10462, avg=6687.09, stdev=2688.65, samples=11 00:20:02.999 write: IOPS=7719, BW=30.2MiB/s (31.6MB/s)(151MiB/5009msec); 0 zone resets 00:20:02.999 slat (usec): min=6, max=2531, avg=53.09, stdev=102.26 00:20:02.999 clat (usec): min=379, max=12613, avg=5776.89, stdev=1387.43 00:20:02.999 lat (usec): min=420, max=12636, avg=5829.99, stdev=1398.19 00:20:02.999 clat percentiles (usec): 00:20:02.999 | 1.00th=[ 2704], 5.00th=[ 3490], 10.00th=[ 3884], 20.00th=[ 4424], 00:20:02.999 | 30.00th=[ 5014], 40.00th=[ 5669], 50.00th=[ 6063], 60.00th=[ 6325], 00:20:02.999 | 70.00th=[ 6587], 80.00th=[ 6849], 90.00th=[ 7177], 95.00th=[ 7635], 00:20:02.999 | 99.00th=[ 9503], 99.50th=[ 9896], 99.90th=[11338], 99.95th=[11469], 00:20:02.999 | 99.99th=[12125] 00:20:02.999 bw ( KiB/s): min= 8080, max=41048, per=86.71%, avg=26777.45, stdev=10540.86, samples=11 00:20:02.999 iops : min= 2020, max=10262, avg=6694.36, stdev=2635.21, samples=11 00:20:02.999 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:20:02.999 lat (msec) : 2=0.17%, 4=4.91%, 10=93.10%, 20=1.76%, 50=0.01% 00:20:02.999 lat (msec) : 100=0.01% 00:20:02.999 cpu : usr=7.56%, sys=31.92%, ctx=8829, majf=0, minf=133 00:20:02.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:02.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:02.999 issued rwts: total=77335,38669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:02.999 00:20:02.999 Run status group 0 (all jobs): 00:20:02.999 READ: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=302MiB (317MB), run=6003-6003msec 00:20:02.999 WRITE: bw=30.2MiB/s (31.6MB/s), 30.2MiB/s-30.2MiB/s (31.6MB/s-31.6MB/s), io=151MiB (158MB), run=5009-5009msec 00:20:02.999 00:20:02.999 Disk stats (read/write): 00:20:02.999 nvme0n1: ios=75785/38669, merge=0/0, ticks=473370/196305, in_queue=669675, util=98.68% 00:20:02.999 14:02:42 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:02.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:02.999 14:02:42 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:02.999 14:02:42 -- common/autotest_common.sh@1205 -- # local i=0 00:20:02.999 14:02:42 -- common/autotest_common.sh@1206 -- # lsblk 
-o NAME,SERIAL 00:20:02.999 14:02:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:02.999 14:02:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:02.999 14:02:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:02.999 14:02:42 -- common/autotest_common.sh@1217 -- # return 0 00:20:02.999 14:02:42 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:02.999 14:02:42 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:20:02.999 14:02:42 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:20:02.999 14:02:42 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:20:02.999 14:02:42 -- target/multipath.sh@144 -- # nvmftestfini 00:20:02.999 14:02:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:02.999 14:02:42 -- nvmf/common.sh@117 -- # sync 00:20:03.258 14:02:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:03.258 14:02:42 -- nvmf/common.sh@120 -- # set +e 00:20:03.258 14:02:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.258 14:02:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:03.258 rmmod nvme_tcp 00:20:03.258 rmmod nvme_fabrics 00:20:03.258 rmmod nvme_keyring 00:20:03.258 14:02:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:03.258 14:02:42 -- nvmf/common.sh@124 -- # set -e 00:20:03.258 14:02:42 -- nvmf/common.sh@125 -- # return 0 00:20:03.258 14:02:42 -- nvmf/common.sh@478 -- # '[' -n 75836 ']' 00:20:03.258 14:02:42 -- nvmf/common.sh@479 -- # killprocess 75836 00:20:03.258 14:02:42 -- common/autotest_common.sh@936 -- # '[' -z 75836 ']' 00:20:03.258 14:02:42 -- common/autotest_common.sh@940 -- # kill -0 75836 00:20:03.258 14:02:42 -- common/autotest_common.sh@941 -- # uname 00:20:03.258 14:02:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:03.259 14:02:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75836 00:20:03.259 killing process with pid 75836 00:20:03.259 14:02:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:03.259 14:02:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:03.259 14:02:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75836' 00:20:03.259 14:02:42 -- common/autotest_common.sh@955 -- # kill 75836 00:20:03.259 14:02:42 -- common/autotest_common.sh@960 -- # wait 75836 00:20:05.170 14:02:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:05.170 14:02:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:05.170 14:02:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:05.170 14:02:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.170 14:02:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.170 14:02:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.170 14:02:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.170 14:02:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.170 14:02:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:05.170 ************************************ 00:20:05.170 END TEST nvmf_multipath 00:20:05.170 ************************************ 00:20:05.170 00:20:05.170 real 0m21.521s 00:20:05.170 user 1m20.503s 00:20:05.170 sys 0m8.292s 00:20:05.170 14:02:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:05.170 14:02:44 -- common/autotest_common.sh@10 -- # set +x 00:20:05.170 
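For context on the multipath run that just finished: the repeated check_ana_state calls in the trace above follow a simple poll-with-timeout pattern against the kernel's reported ANA state. Below is a minimal sketch of that pattern, assuming a helper name and device names chosen purely for illustration; the real implementation lives in test/nvmf/target/multipath.sh and may differ in detail.

# Sketch only: inferred from the xtrace output above, not copied from multipath.sh.
# Polls /sys/block/<path>/ana_state until it reports the expected ANA state,
# retrying once per second for roughly 20 iterations before giving up.
check_ana_state_sketch() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || "$(cat "$ana_state_f")" != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1
        sleep 1s
    done
    return 0
}
# Hypothetical usage, mirroring the calls seen in the log:
#   check_ana_state_sketch nvme0c0n1 optimized
#   check_ana_state_sketch nvme0c1n1 inaccessible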
14:02:44 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:05.170 14:02:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:05.170 14:02:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:05.170 14:02:44 -- common/autotest_common.sh@10 -- # set +x 00:20:05.170 ************************************ 00:20:05.170 START TEST nvmf_zcopy 00:20:05.170 ************************************ 00:20:05.170 14:02:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:20:05.170 * Looking for test storage... 00:20:05.170 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:05.170 14:02:44 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:05.170 14:02:44 -- nvmf/common.sh@7 -- # uname -s 00:20:05.170 14:02:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:05.170 14:02:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:05.170 14:02:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:05.170 14:02:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:05.170 14:02:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:05.170 14:02:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:05.170 14:02:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:05.170 14:02:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:05.170 14:02:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:05.170 14:02:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:05.170 14:02:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:20:05.170 14:02:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:20:05.170 14:02:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:05.170 14:02:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:05.170 14:02:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:05.170 14:02:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:05.170 14:02:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:05.170 14:02:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:05.170 14:02:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:05.170 14:02:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:05.171 14:02:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.171 14:02:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.171 14:02:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.171 14:02:44 -- paths/export.sh@5 -- # export PATH 00:20:05.171 14:02:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:05.171 14:02:44 -- nvmf/common.sh@47 -- # : 0 00:20:05.171 14:02:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:05.171 14:02:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:05.171 14:02:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:05.171 14:02:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:05.171 14:02:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:05.171 14:02:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:05.171 14:02:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:05.171 14:02:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:05.171 14:02:44 -- target/zcopy.sh@12 -- # nvmftestinit 00:20:05.171 14:02:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:05.171 14:02:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:05.171 14:02:44 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:05.171 14:02:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:05.171 14:02:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:05.171 14:02:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.171 14:02:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.171 14:02:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:05.171 14:02:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:05.171 14:02:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:05.171 14:02:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:05.171 14:02:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:05.171 14:02:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:05.171 14:02:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:05.171 14:02:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.171 14:02:44 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.171 14:02:44 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:05.171 14:02:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:05.171 14:02:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:05.171 14:02:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:05.171 14:02:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:05.171 14:02:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.171 14:02:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:05.171 14:02:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:05.171 14:02:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:05.171 14:02:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:05.171 14:02:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:05.171 14:02:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:05.171 Cannot find device "nvmf_tgt_br" 00:20:05.171 14:02:44 -- nvmf/common.sh@155 -- # true 00:20:05.171 14:02:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:05.171 Cannot find device "nvmf_tgt_br2" 00:20:05.171 14:02:44 -- nvmf/common.sh@156 -- # true 00:20:05.171 14:02:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:05.171 14:02:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:05.171 Cannot find device "nvmf_tgt_br" 00:20:05.171 14:02:44 -- nvmf/common.sh@158 -- # true 00:20:05.171 14:02:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:05.430 Cannot find device "nvmf_tgt_br2" 00:20:05.430 14:02:44 -- nvmf/common.sh@159 -- # true 00:20:05.430 14:02:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:05.430 14:02:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:05.430 14:02:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:05.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.430 14:02:44 -- nvmf/common.sh@162 -- # true 00:20:05.430 14:02:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:05.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:05.430 14:02:44 -- nvmf/common.sh@163 -- # true 00:20:05.430 14:02:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:05.430 14:02:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:05.430 14:02:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:05.430 14:02:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:05.430 14:02:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:05.430 14:02:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:05.430 14:02:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:05.430 14:02:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:05.430 14:02:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:05.430 14:02:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:05.430 14:02:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:05.430 14:02:45 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:05.430 14:02:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:05.430 14:02:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:05.430 14:02:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:05.430 14:02:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:05.689 14:02:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:05.689 14:02:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:05.689 14:02:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:05.689 14:02:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:05.689 14:02:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:05.689 14:02:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:05.689 14:02:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:05.689 14:02:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:05.689 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.689 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:20:05.689 00:20:05.689 --- 10.0.0.2 ping statistics --- 00:20:05.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.689 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:05.689 14:02:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:05.689 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:05.689 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.108 ms 00:20:05.689 00:20:05.689 --- 10.0.0.3 ping statistics --- 00:20:05.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.689 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:20:05.689 14:02:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:05.689 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:05.689 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:20:05.689 00:20:05.689 --- 10.0.0.1 ping statistics --- 00:20:05.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.689 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:20:05.689 14:02:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.689 14:02:45 -- nvmf/common.sh@422 -- # return 0 00:20:05.689 14:02:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:05.689 14:02:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.690 14:02:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:05.690 14:02:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:05.690 14:02:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.690 14:02:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:05.690 14:02:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:05.690 14:02:45 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:20:05.690 14:02:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:05.690 14:02:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:05.690 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:20:05.690 14:02:45 -- nvmf/common.sh@470 -- # nvmfpid=76451 00:20:05.690 14:02:45 -- nvmf/common.sh@471 -- # waitforlisten 76451 00:20:05.690 14:02:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:05.690 14:02:45 -- common/autotest_common.sh@817 -- # '[' -z 76451 ']' 00:20:05.690 14:02:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.690 14:02:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:05.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.690 14:02:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.690 14:02:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:05.690 14:02:45 -- common/autotest_common.sh@10 -- # set +x 00:20:05.690 [2024-04-26 14:02:45.351572] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:05.690 [2024-04-26 14:02:45.351693] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.949 [2024-04-26 14:02:45.529802] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.209 [2024-04-26 14:02:45.776286] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.209 [2024-04-26 14:02:45.776343] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.209 [2024-04-26 14:02:45.776359] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.209 [2024-04-26 14:02:45.776382] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.209 [2024-04-26 14:02:45.776396] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.209 [2024-04-26 14:02:45.776429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.777 14:02:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:06.777 14:02:46 -- common/autotest_common.sh@850 -- # return 0 00:20:06.777 14:02:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:06.777 14:02:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 14:02:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.777 14:02:46 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:20:06.777 14:02:46 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:20:06.777 14:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 [2024-04-26 14:02:46.269461] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.777 14:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.777 14:02:46 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:06.777 14:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 14:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.777 14:02:46 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:06.777 14:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 [2024-04-26 14:02:46.293609] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.777 14:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.777 14:02:46 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:06.777 14:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 14:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.777 14:02:46 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:20:06.777 14:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 malloc0 00:20:06.777 14:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.777 14:02:46 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:06.777 14:02:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.777 14:02:46 -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 14:02:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.777 14:02:46 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:20:06.777 14:02:46 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:20:06.777 14:02:46 -- nvmf/common.sh@521 -- # config=() 00:20:06.777 14:02:46 -- nvmf/common.sh@521 -- # local subsystem config 00:20:06.777 14:02:46 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:06.777 14:02:46 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:06.777 { 00:20:06.777 "params": { 00:20:06.777 "name": "Nvme$subsystem", 00:20:06.777 "trtype": "$TEST_TRANSPORT", 
00:20:06.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:06.777 "adrfam": "ipv4", 00:20:06.777 "trsvcid": "$NVMF_PORT", 00:20:06.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:06.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:06.777 "hdgst": ${hdgst:-false}, 00:20:06.777 "ddgst": ${ddgst:-false} 00:20:06.777 }, 00:20:06.777 "method": "bdev_nvme_attach_controller" 00:20:06.777 } 00:20:06.777 EOF 00:20:06.777 )") 00:20:06.777 14:02:46 -- nvmf/common.sh@543 -- # cat 00:20:06.777 14:02:46 -- nvmf/common.sh@545 -- # jq . 00:20:06.777 14:02:46 -- nvmf/common.sh@546 -- # IFS=, 00:20:06.777 14:02:46 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:06.777 "params": { 00:20:06.777 "name": "Nvme1", 00:20:06.777 "trtype": "tcp", 00:20:06.777 "traddr": "10.0.0.2", 00:20:06.777 "adrfam": "ipv4", 00:20:06.777 "trsvcid": "4420", 00:20:06.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.777 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:06.777 "hdgst": false, 00:20:06.777 "ddgst": false 00:20:06.777 }, 00:20:06.777 "method": "bdev_nvme_attach_controller" 00:20:06.777 }' 00:20:07.036 [2024-04-26 14:02:46.468077] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:07.036 [2024-04-26 14:02:46.468207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76502 ] 00:20:07.036 [2024-04-26 14:02:46.638214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.295 [2024-04-26 14:02:46.899294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.862 Running I/O for 10 seconds... 00:20:17.834 00:20:17.834 Latency(us) 00:20:17.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.834 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:20:17.834 Verification LBA range: start 0x0 length 0x1000 00:20:17.834 Nvme1n1 : 10.01 6416.33 50.13 0.00 0.00 19895.02 552.71 25372.17 00:20:17.834 =================================================================================================================== 00:20:17.834 Total : 6416.33 50.13 0.00 0.00 19895.02 552.71 25372.17 00:20:19.212 14:02:58 -- target/zcopy.sh@39 -- # perfpid=76636 00:20:19.212 14:02:58 -- target/zcopy.sh@41 -- # xtrace_disable 00:20:19.212 14:02:58 -- common/autotest_common.sh@10 -- # set +x 00:20:19.212 14:02:58 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:20:19.212 14:02:58 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:20:19.212 14:02:58 -- nvmf/common.sh@521 -- # config=() 00:20:19.212 14:02:58 -- nvmf/common.sh@521 -- # local subsystem config 00:20:19.212 14:02:58 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:19.212 14:02:58 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:19.212 { 00:20:19.212 "params": { 00:20:19.212 "name": "Nvme$subsystem", 00:20:19.212 "trtype": "$TEST_TRANSPORT", 00:20:19.212 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.212 "adrfam": "ipv4", 00:20:19.212 "trsvcid": "$NVMF_PORT", 00:20:19.212 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.212 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.212 "hdgst": ${hdgst:-false}, 00:20:19.212 "ddgst": ${ddgst:-false} 00:20:19.212 }, 00:20:19.212 "method": "bdev_nvme_attach_controller" 00:20:19.212 } 00:20:19.212 EOF 00:20:19.212 
)") 00:20:19.212 [2024-04-26 14:02:58.634117] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.212 [2024-04-26 14:02:58.634182] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.212 14:02:58 -- nvmf/common.sh@543 -- # cat 00:20:19.212 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.212 14:02:58 -- nvmf/common.sh@545 -- # jq . 00:20:19.212 14:02:58 -- nvmf/common.sh@546 -- # IFS=, 00:20:19.212 14:02:58 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:19.212 "params": { 00:20:19.212 "name": "Nvme1", 00:20:19.212 "trtype": "tcp", 00:20:19.212 "traddr": "10.0.0.2", 00:20:19.212 "adrfam": "ipv4", 00:20:19.212 "trsvcid": "4420", 00:20:19.212 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.212 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.212 "hdgst": false, 00:20:19.212 "ddgst": false 00:20:19.212 }, 00:20:19.212 "method": "bdev_nvme_attach_controller" 00:20:19.212 }' 00:20:19.212 [2024-04-26 14:02:58.650054] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.212 [2024-04-26 14:02:58.650097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.212 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.212 [2024-04-26 14:02:58.662026] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.212 [2024-04-26 14:02:58.662078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.673998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.674043] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.685996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.686039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.697971] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.698008] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.709934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.709974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.721935] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.721974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.727933] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:19.213 [2024-04-26 14:02:58.728056] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76636 ] 00:20:19.213 [2024-04-26 14:02:58.733909] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.733947] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.745887] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.745922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.757882] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.757927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.769849] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.769885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.781875] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.781919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.793831] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.793868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.805777] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.805811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.817781] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.817816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.829786] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.829824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.841742] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.841778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.853756] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.853792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.869703] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.869738] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.213 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.213 [2024-04-26 14:02:58.881721] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.213 [2024-04-26 14:02:58.881759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.474 [2024-04-26 14:02:58.893719] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.893755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 [2024-04-26 14:02:58.897391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.474 [2024-04-26 14:02:58.909662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.909700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.474 [2024-04-26 14:02:58.921690] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.921726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:19.474 [2024-04-26 14:02:58.933666] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.933707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.474 [2024-04-26 14:02:58.945621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.945660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.474 [2024-04-26 14:02:58.957616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.957652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.474 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.474 [2024-04-26 14:02:58.969630] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.474 [2024-04-26 14:02:58.969664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:58.985626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:58.985663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.001625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.001665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.013616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.013661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.025646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.025691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.037662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.037707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.049631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.049672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.061626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.061665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.073623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.073659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.085622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.085657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.097618] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.097655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.109629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.109668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.121621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.121664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.133626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.133669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.475 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.475 [2024-04-26 14:02:59.143297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.475 [2024-04-26 14:02:59.145613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.475 [2024-04-26 14:02:59.145650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.157653] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.157691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.169607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.169646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.181613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.181649] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.193638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.193675] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.205590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.205626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.217631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.217668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.229592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.229627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.245618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.245670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:19.739 [2024-04-26 14:02:59.261626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.261666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.273588] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.273620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.285592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.285621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.297598] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.297631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.313597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.313631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.325595] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.325630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.337618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.337651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.349616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.349650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.361628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.361670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.377655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.377693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.389618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.389652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:19.739 [2024-04-26 14:02:59.405635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:19.739 [2024-04-26 14:02:59.405670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:19.739 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.005 [2024-04-26 14:02:59.417637] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.005 [2024-04-26 14:02:59.417680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.005 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.005 [2024-04-26 14:02:59.429643] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.005 [2024-04-26 14:02:59.429687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.005 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.005 [2024-04-26 14:02:59.441621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.005 [2024-04-26 14:02:59.441659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.453623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.453660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.465620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.465657] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.477623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.477668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.489622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.489661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.501609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.501645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.513578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.513612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.525594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.525629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.537597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.537634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.549604] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.549642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.561681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.561719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.573619] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.573656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.585655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.585692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.597678] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.597725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.609657] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.609702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 Running I/O for 5 seconds... 00:20:20.006 [2024-04-26 14:02:59.628973] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.629025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.645836] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.645882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.006 [2024-04-26 14:02:59.663094] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.006 [2024-04-26 14:02:59.663137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.006 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.679852] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.679909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.695940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.695995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.713176] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.713222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.728909] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.728953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.746543] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.746590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.763765] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.763809] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.780071] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.780114] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.801486] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.801539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.817401] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.817456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.834469] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.834513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.850634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.275 [2024-04-26 14:02:59.850692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.275 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.275 [2024-04-26 14:02:59.867128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.276 [2024-04-26 14:02:59.867184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.276 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.276 [2024-04-26 14:02:59.883298] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.276 [2024-04-26 14:02:59.883343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.276 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.276 [2024-04-26 14:02:59.900646] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.276 [2024-04-26 14:02:59.900697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.276 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.276 [2024-04-26 14:02:59.917586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.276 [2024-04-26 14:02:59.917636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.276 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.276 [2024-04-26 14:02:59.933759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.276 [2024-04-26 14:02:59.933808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.276 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:02:59.950556] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:02:59.950600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:02:59.966403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:02:59.966466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:02:59.983731] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:02:59.983777] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:02:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.000024] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.000084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.022096] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
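Each of the *ERROR* entries above is the target rejecting a JSON-RPC nvmf_subsystem_add_ns call because NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, which the JSON-RPC client then reports as Code=-32602 Msg=Invalid parameters. Below is a minimal sketch of one such call, assuming (not shown in this log) that the target listens on SPDK's default /var/tmp/spdk.sock RPC socket and that a malloc0 bdev already exists; the method name, parameters, and error code are the ones recorded above.

#!/usr/bin/env python3
# Sketch: issue one nvmf_subsystem_add_ns JSON-RPC request like those logged above.
# Assumption: SPDK target listening on its default Unix-domain RPC socket.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # SPDK default RPC socket path (assumption)

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    reply = json.loads(sock.recv(65536).decode())

# If NSID 1 is already in use on the subsystem, the reply carries the same
# error seen in the log: code -32602, "Invalid parameters".
print(reply)
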
00:20:20.547 [2024-04-26 14:03:00.022164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.037367] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.037416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.054500] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.054549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.070662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.070708] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.088341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.088401] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.103278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.103332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.119257] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.119304] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.136249] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.136302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.152860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.547 [2024-04-26 14:03:00.152909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.547 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.547 [2024-04-26 14:03:00.168380] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.548 [2024-04-26 14:03:00.168443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.548 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.548 [2024-04-26 14:03:00.179114] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.548 [2024-04-26 14:03:00.179184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.548 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.548 [2024-04-26 14:03:00.193066] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.548 [2024-04-26 14:03:00.193110] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.548 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.548 [2024-04-26 14:03:00.209713] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.548 [2024-04-26 14:03:00.209761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.548 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.224893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.224941] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.241424] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.241523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.257760] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.257813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.274487] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.274549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.291494] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.291540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.308284] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.308339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.323996] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.324047] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.341251] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.341294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.356118] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.356171] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.371724] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.371765] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.390271] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.390315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.405578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.405619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.424460] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.424500] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.439311] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.439356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.455797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.455855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.473330] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.473381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:20.821 [2024-04-26 14:03:00.489625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:20.821 [2024-04-26 14:03:00.489670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.821 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.506859] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.506905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.522785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.522829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.539126] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.539195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:21.097 [2024-04-26 14:03:00.557615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.557668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.571766] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.571811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.587869] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.587914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.607446] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.607506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.621356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.621407] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.636331] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.636385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.097 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.097 [2024-04-26 14:03:00.652366] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.097 [2024-04-26 14:03:00.652419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.669718] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.669790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.686987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.687044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.702022] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.702079] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.713717] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.713771] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.731073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.731166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.746070] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.746128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.098 [2024-04-26 14:03:00.762413] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.098 [2024-04-26 14:03:00.762472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.098 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.778820] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.778875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.794797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.794849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.805040] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.805091] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.819841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.819898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.829716] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.829768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.843844] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.843895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.859447] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.859505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.876819] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.876880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.893977] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.894030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.908643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.908692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.924614] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.924666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.941940] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.941992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.957374] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.957421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.974647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.974711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:00.991223] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:00.991274] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:01.008127] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:01.008194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:01.024260] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:01.024322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.376 [2024-04-26 14:03:01.041873] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.376 [2024-04-26 14:03:01.041926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.376 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.640 [2024-04-26 14:03:01.056759] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.056825] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.072065] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.072114] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.089060] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.089116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.104258] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.104312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.120971] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.121025] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.136674] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.136735] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.154205] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.154261] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.168699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:20:21.641 [2024-04-26 14:03:01.168752] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.186286] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.186344] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.201403] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.201464] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.211495] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.211566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.226815] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.226870] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.244626] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.244684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.260102] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.260168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.277135] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.277203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.293288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.293338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.641 [2024-04-26 14:03:01.302915] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.641 [2024-04-26 14:03:01.302965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.641 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.318146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.318209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.336256] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.336300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.351668] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.351719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.364176] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.364226] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.381691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.381741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.396771] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.396822] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.407326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.407385] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.422221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.422270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.438900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.438956] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.455234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.455282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.473042] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.473093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.487476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.487521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.502809] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.502854] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.514861] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.514922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.532834] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.532878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.547388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.547432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:21.901 [2024-04-26 14:03:01.562974] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:21.901 [2024-04-26 14:03:01.563023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:21.901 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.575490] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.575535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.585768] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.585818] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.600679] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.600732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.616058] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.616109] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.632356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.632399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.649180] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.649237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:22.162 [2024-04-26 14:03:01.666040] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.666090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.681650] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.681698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.698797] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.698848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.716508] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.716574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.730900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.730948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.747677] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.747725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.763725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.763774] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.779967] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.780019] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.798445] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.798495] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.813304] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.813351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.162 [2024-04-26 14:03:01.822270] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.162 [2024-04-26 14:03:01.822315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.162 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.837832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.837892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.855158] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.855244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.871984] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.872034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.888663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.888712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.905644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.905691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.921552] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.921614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.938967] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.939027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.955868] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.955917] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.972987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.973053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:01.989134] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:01.989203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:02.006620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:02.006682] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:02.021376] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:02.021422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:02.036435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:02.036479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.421 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.421 [2024-04-26 14:03:02.051279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.421 [2024-04-26 14:03:02.051338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.422 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.422 [2024-04-26 14:03:02.067474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.422 [2024-04-26 14:03:02.067521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.422 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.422 [2024-04-26 14:03:02.084927] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:22.422 [2024-04-26 14:03:02.084977] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.422 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.100266] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.100325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.116830] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.116884] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.134693] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.134758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.149273] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.149320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.166489] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.166549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.180923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.180967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.197760] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.197804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.214191] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.214238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.230616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.230662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.246788] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.246834] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.263961] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.264022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.280009] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.280056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.296766] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:20:22.681 [2024-04-26 14:03:02.296807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.313503] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.313545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.330447] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.330496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.681 [2024-04-26 14:03:02.348348] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.681 [2024-04-26 14:03:02.348397] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.681 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.363559] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.940 [2024-04-26 14:03:02.363608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.940 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.380420] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.940 [2024-04-26 14:03:02.380473] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.940 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.397267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.940 [2024-04-26 14:03:02.397310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.940 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.412615] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.940 [2024-04-26 14:03:02.412664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.940 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.430575] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.940 [2024-04-26 14:03:02.430619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.940 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.444904] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.940 [2024-04-26 14:03:02.444946] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.940 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.940 [2024-04-26 14:03:02.460539] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.460584] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.478179] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.478223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.493297] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.493338] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.513036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.513102] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.530828] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.530887] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.547008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.547054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.564519] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.564565] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.580008] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.580073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:22.941 [2024-04-26 14:03:02.597661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:22.941 [2024-04-26 14:03:02.597716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:22.941 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.616778] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.616827] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.628968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.629010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.646036] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.646081] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.661765] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.661806] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.679440] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.679484] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.694533] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.694579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.705085] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.705128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.719250] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.719292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.733928] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.733983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.750435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.750485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.766418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.766476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.778379] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.778422] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.796617] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.796659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.810674] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.810716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:23.201 [2024-04-26 14:03:02.826735] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.826778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.842856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.842899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.201 [2024-04-26 14:03:02.860877] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.201 [2024-04-26 14:03:02.860920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.201 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.876014] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.876056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.894691] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.894733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.908933] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.908974] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.924301] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.924340] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.941216] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.941256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.958279] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.958322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.973870] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.973913] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:02.991341] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:02.991381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:02 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.008541] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.008592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.028606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.028655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.044742] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.044785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.062120] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.062177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.077246] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.077285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.093946] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.093993] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.108658] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.108719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.461 [2024-04-26 14:03:03.125027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.461 [2024-04-26 14:03:03.125074] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.461 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.141702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.141757] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.163225] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.163270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.180744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.180789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.197628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.197670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.213789] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.213837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.230715] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.230759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.246524] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.246579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.263783] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.263829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.280060] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.280105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.726 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.726 [2024-04-26 14:03:03.297194] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.726 [2024-04-26 14:03:03.297238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.727 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.727 [2024-04-26 14:03:03.313519] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.727 [2024-04-26 14:03:03.313579] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.727 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.727 [2024-04-26 14:03:03.333481] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.727 [2024-04-26 14:03:03.333534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.727 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.727 [2024-04-26 14:03:03.351711] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.727 [2024-04-26 14:03:03.351758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.727 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.727 [2024-04-26 14:03:03.367974] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.727 [2024-04-26 14:03:03.368014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.727 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.727 [2024-04-26 14:03:03.385122] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.727 [2024-04-26 14:03:03.385174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.727 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.401572] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.401621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.422969] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.423015] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.438038] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.438086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.454897] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.454941] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.471923] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.471969] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.489221] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
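(Editor's note, for reference only: the entries above all record the same rejected RPC. A minimal sketch of that request, reconstructed from the logged parameters, is shown below; the Unix-socket path and the raw-socket client are assumptions and are not part of this log — the test itself uses SPDK's own RPC tooling.)

#!/usr/bin/env python3
# Sketch of the call that is repeatedly rejected above. Assumptions (not in
# this log): the SPDK target is listening on its default RPC socket
# /var/tmp/spdk.sock, and NSID 1 is already attached to the subsystem, so the
# server answers with a JSON-RPC error, Code=-32602 Msg=Invalid parameters.
import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",        # method name from the log
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",  # subsystem NQN from the log
        "namespace": {
            "bdev_name": "malloc0",           # bdev backing the namespace
            "nsid": 1,                        # NSID that is already in use
            "no_auto_visible": False,
        },
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")
    sock.sendall(json.dumps(request).encode())
    # Expected reply: a JSON-RPC error object with code -32602 ("Invalid parameters"),
    # matching the "Requested NSID 1 already in use" errors logged above.
    print(sock.recv(65536).decode())

(End of editor's note; the console log continues below.)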
00:20:23.991 [2024-04-26 14:03:03.489269] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.510682] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.510730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.525388] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.525432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.991 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.991 [2024-04-26 14:03:03.539586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.991 [2024-04-26 14:03:03.539629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.555931] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.555978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.572471] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.572513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.588967] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.589010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.605712] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.605756] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.626155] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.626217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.640920] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.640968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:23.992 [2024-04-26 14:03:03.656069] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:23.992 [2024-04-26 14:03:03.656115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:23.992 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.676600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.676647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.693025] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.693076] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.709776] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.709823] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.727905] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.727948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.742161] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.742218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.758726] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.758768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.774753] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.774801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.793255] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.793297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.807490] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.807532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.827906] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.827950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.848171] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.848221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.869169] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.869214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.885799] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.885843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.902095] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.902140] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.251 [2024-04-26 14:03:03.918791] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.251 [2024-04-26 14:03:03.918836] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.251 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.534 [2024-04-26 14:03:03.936592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.534 [2024-04-26 14:03:03.936635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:24.534 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.534 [2024-04-26 14:03:03.957258] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.534 [2024-04-26 14:03:03.957308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.534 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.534 [2024-04-26 14:03:03.977888] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.534 [2024-04-26 14:03:03.977936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.534 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.534 [2024-04-26 14:03:03.992113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.534 [2024-04-26 14:03:03.992164] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.534 2024/04/26 14:03:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.534 [2024-04-26 14:03:04.007029] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.534 [2024-04-26 14:03:04.007071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.026255] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.026298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.038136] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.038191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:24.535 [2024-04-26 14:03:04.058643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.058690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.074427] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.074479] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.095558] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.095607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.115832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.115878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.132519] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.132564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.150087] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.150134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.165284] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.165339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.182099] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.182166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.535 [2024-04-26 14:03:04.203540] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.535 [2024-04-26 14:03:04.203586] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.535 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.794 [2024-04-26 14:03:04.218475] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.794 [2024-04-26 14:03:04.218519] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.234826] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.234871] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.251086] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.251130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.267697] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.267739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.284087] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.284130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.300755] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.300801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.321515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.321560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.340856] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.340915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.361992] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.362041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.378025] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.378072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.394285] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.394329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.414374] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.414431] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.434964] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.435008] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:24.795 [2024-04-26 14:03:04.451661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:24.795 [2024-04-26 14:03:04.451707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:24.795 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.469218] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.469262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.484753] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.484812] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.504109] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.504173] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.518113] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.518175] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.537714] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.537762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.555477] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.555520] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.569673] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.569714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.590200] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.590265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.610846] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.610896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 00:20:25.055 Latency(us) 00:20:25.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.055 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:20:25.055 Nvme1n1 : 5.01 12239.46 95.62 0.00 0.00 10446.84 4553.30 17897.38 00:20:25.055 
=================================================================================================================== 00:20:25.055 Total : 12239.46 95.62 0.00 0.00 10446.84 4553.30 17897.38 00:20:25.055 [2024-04-26 14:03:04.625998] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.626040] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.641939] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.641981] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.653900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.653939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.665955] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.666023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.677884] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.677923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.689874] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.689910] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.701846] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 
1 already in use 00:20:25.055 [2024-04-26 14:03:04.701880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.713808] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.713843] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.055 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.055 [2024-04-26 14:03:04.725814] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.055 [2024-04-26 14:03:04.725849] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.737817] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.737857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.749785] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.749821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.761790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.761828] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.777763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.777804] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.789767] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.789810] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.801790] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.801837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.813709] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.813747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.829725] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.829762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.841688] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.841725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.315 [2024-04-26 14:03:04.853643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.315 [2024-04-26 14:03:04.853679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.315 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.865656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.865691] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.877643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.877679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.889656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.889691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.901657] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.901693] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.913661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.913705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.929672] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.929713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.945682] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.945726] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.961688] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.961747] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.316 [2024-04-26 14:03:04.973692] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.316 [2024-04-26 14:03:04.973741] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.316 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:04.989669] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:04.989716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.575 2024/04/26 14:03:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.005660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:05.005698] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.575 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.017661] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:05.017695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.575 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.033620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:05.033655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.575 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.045650] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:05.045683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:20:25.575 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.057662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:05.057699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.575 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.069649] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.575 [2024-04-26 14:03:05.069683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.575 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.575 [2024-04-26 14:03:05.081635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.081671] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.097635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.097664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.109655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.109690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.121643] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.121678] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:20:25.576 [2024-04-26 14:03:05.137644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.137690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.149659] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.149697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.161611] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.161647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.173586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.173622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.185618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.185652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.197606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.197640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.209620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.209654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.221628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.221663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.233639] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.233684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.576 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.576 [2024-04-26 14:03:05.245633] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.576 [2024-04-26 14:03:05.245670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.835 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.835 [2024-04-26 14:03:05.257608] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.835 [2024-04-26 14:03:05.257642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.835 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.835 [2024-04-26 14:03:05.269609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.835 [2024-04-26 14:03:05.269642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.835 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.835 [2024-04-26 14:03:05.281616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.835 [2024-04-26 14:03:05.281667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.835 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.835 [2024-04-26 14:03:05.293611] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.293645] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.305619] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.305653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.317599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.317634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.329596] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.329632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.341591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.341644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.353613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.353648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.365656] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.365696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.377618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.377653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.389594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.389628] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.401668] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.401705] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.413629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.413669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.425583] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.425615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.441610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.441642] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.453593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.453626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.465631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.465673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.477618] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.477655] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.489619] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.489652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:25.836 [2024-04-26 14:03:05.501627] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:25.836 [2024-04-26 14:03:05.501663] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:25.836 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.513662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.513701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.525591] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.525627] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.537628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.537664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.549599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.549633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.561630] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.561669] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.573662] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.573697] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.585596] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.585632] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.597613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.597647] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.609603] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:20:26.096 [2024-04-26 14:03:05.609639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.621639] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.621690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.633622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.633659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.645629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.645665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.657603] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.657638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.669616] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.669650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.681588] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.681623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.693623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.096 [2024-04-26 14:03:05.693670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.096 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.096 [2024-04-26 14:03:05.709664] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.097 [2024-04-26 14:03:05.709709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.097 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.097 [2024-04-26 14:03:05.721621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.097 [2024-04-26 14:03:05.721661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.097 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.097 [2024-04-26 14:03:05.733639] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.097 [2024-04-26 14:03:05.733677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.097 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.097 [2024-04-26 14:03:05.745655] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.097 [2024-04-26 14:03:05.745691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.097 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.097 [2024-04-26 14:03:05.757622] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.097 [2024-04-26 14:03:05.757658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.097 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.097 [2024-04-26 14:03:05.769634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.097 [2024-04-26 14:03:05.769689] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.781613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.356 [2024-04-26 14:03:05.781755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.793637] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.356 [2024-04-26 14:03:05.793898] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.805652] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.356 [2024-04-26 14:03:05.805751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.817600] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.356 [2024-04-26 14:03:05.817711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.829621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.356 [2024-04-26 14:03:05.829723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.841596] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.356 [2024-04-26 14:03:05.841695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.356 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.356 [2024-04-26 14:03:05.853586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.357 [2024-04-26 14:03:05.853690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.357 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.357 [2024-04-26 14:03:05.865596] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.357 [2024-04-26 14:03:05.865722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.357 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.357 [2024-04-26 14:03:05.877590] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.357 [2024-04-26 14:03:05.877690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.357 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.357 [2024-04-26 14:03:05.889629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.357 [2024-04-26 14:03:05.889758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.357 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.357 [2024-04-26 14:03:05.901621] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.357 [2024-04-26 14:03:05.901743] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.357 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.357 [2024-04-26 14:03:05.913610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:20:26.357 [2024-04-26 14:03:05.913712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:26.357 2024/04/26 14:03:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:26.357 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (76636) - No such process 00:20:26.357 14:03:05 -- target/zcopy.sh@49 -- # wait 76636 00:20:26.357 14:03:05 -- target/zcopy.sh@52 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:26.357 14:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.357 14:03:05 -- common/autotest_common.sh@10 -- # set +x 00:20:26.357 14:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.357 14:03:05 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:26.357 14:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.357 14:03:05 -- common/autotest_common.sh@10 -- # set +x 00:20:26.357 delay0 00:20:26.357 14:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.357 14:03:05 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:20:26.357 14:03:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.357 14:03:05 -- common/autotest_common.sh@10 -- # set +x 00:20:26.357 14:03:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.357 14:03:05 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:20:26.615 [2024-04-26 14:03:06.186080] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:33.183 Initializing NVMe Controllers 00:20:33.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.183 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:33.183 Initialization complete. Launching workers. 00:20:33.183 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 875 00:20:33.183 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1158, failed to submit 37 00:20:33.183 success 977, unsuccess 181, failed 0 00:20:33.183 14:03:12 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:20:33.183 14:03:12 -- target/zcopy.sh@60 -- # nvmftestfini 00:20:33.183 14:03:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:33.183 14:03:12 -- nvmf/common.sh@117 -- # sync 00:20:33.183 14:03:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.183 14:03:12 -- nvmf/common.sh@120 -- # set +e 00:20:33.183 14:03:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.183 14:03:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.183 rmmod nvme_tcp 00:20:33.183 rmmod nvme_fabrics 00:20:33.183 rmmod nvme_keyring 00:20:33.183 14:03:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.183 14:03:12 -- nvmf/common.sh@124 -- # set -e 00:20:33.183 14:03:12 -- nvmf/common.sh@125 -- # return 0 00:20:33.183 14:03:12 -- nvmf/common.sh@478 -- # '[' -n 76451 ']' 00:20:33.183 14:03:12 -- nvmf/common.sh@479 -- # killprocess 76451 00:20:33.183 14:03:12 -- common/autotest_common.sh@936 -- # '[' -z 76451 ']' 00:20:33.183 14:03:12 -- common/autotest_common.sh@940 -- # kill -0 76451 00:20:33.183 14:03:12 -- common/autotest_common.sh@941 -- # uname 00:20:33.183 14:03:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.183 14:03:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76451 00:20:33.183 14:03:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:33.183 14:03:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:33.183 14:03:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76451' 00:20:33.183 killing process with 
pid 76451 00:20:33.183 14:03:12 -- common/autotest_common.sh@955 -- # kill 76451 00:20:33.183 14:03:12 -- common/autotest_common.sh@960 -- # wait 76451 00:20:34.559 14:03:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:34.559 14:03:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:34.559 14:03:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:34.559 14:03:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.559 14:03:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.559 14:03:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.559 14:03:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.559 14:03:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.559 14:03:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:34.559 00:20:34.559 real 0m29.438s 00:20:34.559 user 0m48.301s 00:20:34.559 sys 0m7.991s 00:20:34.559 14:03:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.559 ************************************ 00:20:34.559 END TEST nvmf_zcopy 00:20:34.559 ************************************ 00:20:34.559 14:03:13 -- common/autotest_common.sh@10 -- # set +x 00:20:34.559 14:03:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:34.559 14:03:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:34.559 14:03:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:34.559 14:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:34.559 ************************************ 00:20:34.559 START TEST nvmf_nmic 00:20:34.559 ************************************ 00:20:34.559 14:03:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:20:34.820 * Looking for test storage... 
00:20:34.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:34.820 14:03:14 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:34.820 14:03:14 -- nvmf/common.sh@7 -- # uname -s 00:20:34.820 14:03:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.820 14:03:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.820 14:03:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.820 14:03:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.820 14:03:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.820 14:03:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.820 14:03:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.820 14:03:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.820 14:03:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.820 14:03:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.820 14:03:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:20:34.820 14:03:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:20:34.820 14:03:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.820 14:03:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.820 14:03:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.820 14:03:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.820 14:03:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.820 14:03:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.820 14:03:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.820 14:03:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.820 14:03:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.820 14:03:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.820 14:03:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.820 14:03:14 -- paths/export.sh@5 -- # export PATH 00:20:34.820 14:03:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.820 14:03:14 -- nvmf/common.sh@47 -- # : 0 00:20:34.820 14:03:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.820 14:03:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.820 14:03:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.820 14:03:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.820 14:03:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.820 14:03:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.820 14:03:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.820 14:03:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.820 14:03:14 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:34.820 14:03:14 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:34.820 14:03:14 -- target/nmic.sh@14 -- # nvmftestinit 00:20:34.820 14:03:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.820 14:03:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.820 14:03:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.820 14:03:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.820 14:03:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.820 14:03:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.820 14:03:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.820 14:03:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.820 14:03:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:34.820 14:03:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:34.820 14:03:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:34.820 14:03:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:34.820 14:03:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:34.820 14:03:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:34.820 14:03:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.820 14:03:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.820 14:03:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:34.820 14:03:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:34.820 14:03:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.820 14:03:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.820 14:03:14 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.820 14:03:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.820 14:03:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.820 14:03:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.820 14:03:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.820 14:03:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.820 14:03:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:34.820 14:03:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:34.820 Cannot find device "nvmf_tgt_br" 00:20:34.820 14:03:14 -- nvmf/common.sh@155 -- # true 00:20:34.820 14:03:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.820 Cannot find device "nvmf_tgt_br2" 00:20:34.820 14:03:14 -- nvmf/common.sh@156 -- # true 00:20:34.820 14:03:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:34.820 14:03:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:34.820 Cannot find device "nvmf_tgt_br" 00:20:34.820 14:03:14 -- nvmf/common.sh@158 -- # true 00:20:34.820 14:03:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:34.820 Cannot find device "nvmf_tgt_br2" 00:20:34.820 14:03:14 -- nvmf/common.sh@159 -- # true 00:20:34.820 14:03:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:34.820 14:03:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:34.820 14:03:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.820 14:03:14 -- nvmf/common.sh@162 -- # true 00:20:34.820 14:03:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.820 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.820 14:03:14 -- nvmf/common.sh@163 -- # true 00:20:34.820 14:03:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.820 14:03:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.820 14:03:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:35.080 14:03:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:35.080 14:03:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:35.080 14:03:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:35.080 14:03:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:35.080 14:03:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:35.080 14:03:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:35.080 14:03:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:35.080 14:03:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:35.080 14:03:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:35.080 14:03:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:35.080 14:03:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:35.080 14:03:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:35.080 14:03:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:20:35.080 14:03:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:35.080 14:03:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:35.080 14:03:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:35.080 14:03:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:35.080 14:03:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:35.080 14:03:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:35.080 14:03:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:35.080 14:03:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:35.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:20:35.080 00:20:35.080 --- 10.0.0.2 ping statistics --- 00:20:35.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.080 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:20:35.080 14:03:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:35.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:35.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:20:35.080 00:20:35.080 --- 10.0.0.3 ping statistics --- 00:20:35.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.080 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:20:35.080 14:03:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:35.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:35.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:20:35.080 00:20:35.080 --- 10.0.0.1 ping statistics --- 00:20:35.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.080 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:35.080 14:03:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.080 14:03:14 -- nvmf/common.sh@422 -- # return 0 00:20:35.080 14:03:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:35.080 14:03:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.080 14:03:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:35.080 14:03:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:35.080 14:03:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.080 14:03:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:35.080 14:03:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:35.080 14:03:14 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:20:35.080 14:03:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:35.080 14:03:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.080 14:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:35.080 14:03:14 -- nvmf/common.sh@470 -- # nvmfpid=76997 00:20:35.080 14:03:14 -- nvmf/common.sh@471 -- # waitforlisten 76997 00:20:35.080 14:03:14 -- common/autotest_common.sh@817 -- # '[' -z 76997 ']' 00:20:35.080 14:03:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.080 14:03:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.080 14:03:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:35.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:35.080 14:03:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.080 14:03:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.080 14:03:14 -- common/autotest_common.sh@10 -- # set +x 00:20:35.339 [2024-04-26 14:03:14.820018] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:35.339 [2024-04-26 14:03:14.820589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.339 [2024-04-26 14:03:14.997249] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.598 [2024-04-26 14:03:15.240478] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.598 [2024-04-26 14:03:15.240531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.598 [2024-04-26 14:03:15.240547] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.598 [2024-04-26 14:03:15.240558] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.598 [2024-04-26 14:03:15.240571] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.598 [2024-04-26 14:03:15.241067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.598 [2024-04-26 14:03:15.241234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.598 [2024-04-26 14:03:15.241454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.598 [2024-04-26 14:03:15.241518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.165 14:03:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.165 14:03:15 -- common/autotest_common.sh@850 -- # return 0 00:20:36.165 14:03:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:36.165 14:03:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.165 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.165 14:03:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.165 14:03:15 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.165 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.165 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.165 [2024-04-26 14:03:15.738637] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.165 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.165 14:03:15 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.165 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.165 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.424 Malloc0 00:20:36.424 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.424 14:03:15 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:36.424 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.424 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.424 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.425 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.425 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.425 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.425 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.425 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.425 [2024-04-26 14:03:15.883998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.425 test case1: single bdev can't be used in multiple subsystems 00:20:36.425 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:20:36.425 14:03:15 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:36.425 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.425 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.425 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:36.425 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.425 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.425 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@28 -- # nmic_status=0 00:20:36.425 14:03:15 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:20:36.425 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.425 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.425 [2024-04-26 14:03:15.919773] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:20:36.425 [2024-04-26 14:03:15.919826] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:20:36.425 [2024-04-26 14:03:15.919842] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:36.425 2024/04/26 14:03:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:20:36.425 request: 00:20:36.425 { 00:20:36.425 "method": "nvmf_subsystem_add_ns", 00:20:36.425 "params": { 00:20:36.425 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.425 "namespace": { 00:20:36.425 "bdev_name": "Malloc0", 00:20:36.425 "no_auto_visible": false 00:20:36.425 } 00:20:36.425 } 00:20:36.425 } 00:20:36.425 Got JSON-RPC error response 00:20:36.425 GoRPCClient: error on JSON-RPC call 00:20:36.425 Adding namespace failed - expected result. 00:20:36.425 test case2: host connect to nvmf target in multiple paths 00:20:36.425 14:03:15 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@29 -- # nmic_status=1 00:20:36.425 14:03:15 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:20:36.425 14:03:15 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
00:20:36.425 14:03:15 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:20:36.425 14:03:15 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.425 14:03:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.425 14:03:15 -- common/autotest_common.sh@10 -- # set +x 00:20:36.425 [2024-04-26 14:03:15.939895] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.425 14:03:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.425 14:03:15 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:36.683 14:03:16 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:20:36.683 14:03:16 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:36.683 14:03:16 -- common/autotest_common.sh@1184 -- # local i=0 00:20:36.683 14:03:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.683 14:03:16 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:36.683 14:03:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:39.214 14:03:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:39.214 14:03:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:39.214 14:03:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:39.214 14:03:18 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:39.214 14:03:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:39.214 14:03:18 -- common/autotest_common.sh@1194 -- # return 0 00:20:39.214 14:03:18 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:39.214 [global] 00:20:39.214 thread=1 00:20:39.214 invalidate=1 00:20:39.214 rw=write 00:20:39.214 time_based=1 00:20:39.215 runtime=1 00:20:39.215 ioengine=libaio 00:20:39.215 direct=1 00:20:39.215 bs=4096 00:20:39.215 iodepth=1 00:20:39.215 norandommap=0 00:20:39.215 numjobs=1 00:20:39.215 00:20:39.215 verify_dump=1 00:20:39.215 verify_backlog=512 00:20:39.215 verify_state_save=0 00:20:39.215 do_verify=1 00:20:39.215 verify=crc32c-intel 00:20:39.215 [job0] 00:20:39.215 filename=/dev/nvme0n1 00:20:39.215 Could not set queue depth (nvme0n1) 00:20:39.215 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:39.215 fio-3.35 00:20:39.215 Starting 1 thread 00:20:40.150 00:20:40.150 job0: (groupid=0, jobs=1): err= 0: pid=77102: Fri Apr 26 14:03:19 2024 00:20:40.150 read: IOPS=3684, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec) 00:20:40.150 slat (nsec): min=8205, max=33962, avg=9232.98, stdev=1889.33 00:20:40.150 clat (usec): min=111, max=400, avg=135.39, stdev=12.25 00:20:40.150 lat (usec): min=120, max=409, avg=144.62, stdev=12.59 00:20:40.150 clat percentiles (usec): 00:20:40.150 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 126], 00:20:40.150 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:20:40.150 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:20:40.150 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 
200], 99.95th=[ 206], 00:20:40.150 | 99.99th=[ 400] 00:20:40.150 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:40.150 slat (nsec): min=12480, max=70210, avg=14213.52, stdev=2982.56 00:20:40.150 clat (usec): min=79, max=446, avg=97.99, stdev=11.44 00:20:40.150 lat (usec): min=92, max=460, avg=112.20, stdev=12.21 00:20:40.150 clat percentiles (usec): 00:20:40.150 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 90], 00:20:40.150 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:20:40.150 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 112], 95.00th=[ 117], 00:20:40.150 | 99.00th=[ 129], 99.50th=[ 133], 99.90th=[ 149], 99.95th=[ 239], 00:20:40.150 | 99.99th=[ 449] 00:20:40.150 bw ( KiB/s): min=16384, max=16384, per=100.00%, avg=16384.00, stdev= 0.00, samples=1 00:20:40.150 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:40.150 lat (usec) : 100=34.92%, 250=65.06%, 500=0.03% 00:20:40.150 cpu : usr=1.70%, sys=7.00%, ctx=7784, majf=0, minf=2 00:20:40.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.150 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.150 issued rwts: total=3688,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.150 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:40.150 00:20:40.150 Run status group 0 (all jobs): 00:20:40.150 READ: bw=14.4MiB/s (15.1MB/s), 14.4MiB/s-14.4MiB/s (15.1MB/s-15.1MB/s), io=14.4MiB (15.1MB), run=1001-1001msec 00:20:40.150 WRITE: bw=16.0MiB/s (16.8MB/s), 16.0MiB/s-16.0MiB/s (16.8MB/s-16.8MB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:20:40.150 00:20:40.150 Disk stats (read/write): 00:20:40.150 nvme0n1: ios=3443/3584, merge=0/0, ticks=504/390, in_queue=894, util=91.78% 00:20:40.150 14:03:19 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:40.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:40.409 14:03:19 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:40.409 14:03:19 -- common/autotest_common.sh@1205 -- # local i=0 00:20:40.409 14:03:19 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:40.409 14:03:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.409 14:03:19 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.409 14:03:19 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:40.409 14:03:19 -- common/autotest_common.sh@1217 -- # return 0 00:20:40.409 14:03:19 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:40.409 14:03:19 -- target/nmic.sh@53 -- # nvmftestfini 00:20:40.409 14:03:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:40.409 14:03:19 -- nvmf/common.sh@117 -- # sync 00:20:40.409 14:03:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.409 14:03:19 -- nvmf/common.sh@120 -- # set +e 00:20:40.409 14:03:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.409 14:03:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.409 rmmod nvme_tcp 00:20:40.409 rmmod nvme_fabrics 00:20:40.409 rmmod nvme_keyring 00:20:40.409 14:03:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.409 14:03:20 -- nvmf/common.sh@124 -- # set -e 00:20:40.409 14:03:20 -- nvmf/common.sh@125 -- # return 0 00:20:40.409 14:03:20 -- nvmf/common.sh@478 -- # '[' -n 76997 ']' 00:20:40.409 14:03:20 -- nvmf/common.sh@479 -- # killprocess 
76997 00:20:40.409 14:03:20 -- common/autotest_common.sh@936 -- # '[' -z 76997 ']' 00:20:40.409 14:03:20 -- common/autotest_common.sh@940 -- # kill -0 76997 00:20:40.409 14:03:20 -- common/autotest_common.sh@941 -- # uname 00:20:40.409 14:03:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:40.409 14:03:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76997 00:20:40.409 killing process with pid 76997 00:20:40.409 14:03:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:40.409 14:03:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:40.409 14:03:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76997' 00:20:40.409 14:03:20 -- common/autotest_common.sh@955 -- # kill 76997 00:20:40.409 14:03:20 -- common/autotest_common.sh@960 -- # wait 76997 00:20:42.314 14:03:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:42.314 14:03:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:42.314 14:03:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:42.314 14:03:21 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.314 14:03:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.314 14:03:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.314 14:03:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.314 14:03:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.314 14:03:21 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:42.314 00:20:42.314 real 0m7.456s 00:20:42.314 user 0m23.294s 00:20:42.314 sys 0m1.820s 00:20:42.314 14:03:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:42.314 14:03:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.314 ************************************ 00:20:42.314 END TEST nvmf_nmic 00:20:42.314 ************************************ 00:20:42.314 14:03:21 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:42.314 14:03:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.314 14:03:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.314 14:03:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.314 ************************************ 00:20:42.314 START TEST nvmf_fio_target 00:20:42.314 ************************************ 00:20:42.314 14:03:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:20:42.314 * Looking for test storage... 
00:20:42.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:42.314 14:03:21 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:42.314 14:03:21 -- nvmf/common.sh@7 -- # uname -s 00:20:42.314 14:03:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.314 14:03:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.314 14:03:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.314 14:03:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.314 14:03:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.314 14:03:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.314 14:03:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.314 14:03:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.314 14:03:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.314 14:03:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.314 14:03:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:20:42.314 14:03:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:20:42.314 14:03:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.314 14:03:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.314 14:03:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:42.314 14:03:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.314 14:03:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:42.314 14:03:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.314 14:03:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.314 14:03:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.314 14:03:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.314 14:03:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.314 14:03:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.314 14:03:21 -- paths/export.sh@5 -- # export PATH 00:20:42.314 14:03:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.314 14:03:21 -- nvmf/common.sh@47 -- # : 0 00:20:42.314 14:03:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.314 14:03:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.314 14:03:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.314 14:03:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.314 14:03:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.314 14:03:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.314 14:03:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.315 14:03:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.315 14:03:21 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:42.315 14:03:21 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:42.315 14:03:21 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:42.315 14:03:21 -- target/fio.sh@16 -- # nvmftestinit 00:20:42.315 14:03:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:42.315 14:03:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.315 14:03:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:42.315 14:03:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.315 14:03:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.315 14:03:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.315 14:03:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.315 14:03:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.315 14:03:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:42.315 14:03:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:42.315 14:03:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:42.315 14:03:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:42.315 14:03:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:42.315 14:03:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:42.315 14:03:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.315 14:03:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.315 14:03:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:42.315 14:03:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:42.315 14:03:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:42.315 14:03:21 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:42.315 14:03:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:42.315 14:03:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.315 14:03:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:42.315 14:03:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:42.315 14:03:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:42.315 14:03:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:42.315 14:03:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:42.315 14:03:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:42.315 Cannot find device "nvmf_tgt_br" 00:20:42.315 14:03:21 -- nvmf/common.sh@155 -- # true 00:20:42.315 14:03:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.574 Cannot find device "nvmf_tgt_br2" 00:20:42.574 14:03:21 -- nvmf/common.sh@156 -- # true 00:20:42.574 14:03:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:42.574 14:03:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:42.574 Cannot find device "nvmf_tgt_br" 00:20:42.574 14:03:22 -- nvmf/common.sh@158 -- # true 00:20:42.574 14:03:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:42.574 Cannot find device "nvmf_tgt_br2" 00:20:42.574 14:03:22 -- nvmf/common.sh@159 -- # true 00:20:42.574 14:03:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:42.574 14:03:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:42.574 14:03:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.574 14:03:22 -- nvmf/common.sh@162 -- # true 00:20:42.574 14:03:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.574 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:42.574 14:03:22 -- nvmf/common.sh@163 -- # true 00:20:42.574 14:03:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:42.574 14:03:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:42.574 14:03:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:42.574 14:03:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:42.574 14:03:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:42.574 14:03:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:42.574 14:03:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:42.574 14:03:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:42.574 14:03:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:42.834 14:03:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:42.834 14:03:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:42.834 14:03:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:42.834 14:03:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:42.834 14:03:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:42.834 14:03:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:20:42.834 14:03:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:42.834 14:03:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:42.834 14:03:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:42.834 14:03:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:42.834 14:03:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:42.834 14:03:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:42.834 14:03:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:42.834 14:03:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:42.834 14:03:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:42.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:20:42.834 00:20:42.834 --- 10.0.0.2 ping statistics --- 00:20:42.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.834 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:42.834 14:03:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:42.834 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:42.834 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:20:42.834 00:20:42.834 --- 10.0.0.3 ping statistics --- 00:20:42.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.834 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:20:42.834 14:03:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:42.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:20:42.834 00:20:42.834 --- 10.0.0.1 ping statistics --- 00:20:42.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.834 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:42.834 14:03:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.834 14:03:22 -- nvmf/common.sh@422 -- # return 0 00:20:42.834 14:03:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:42.834 14:03:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.834 14:03:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:42.834 14:03:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:42.834 14:03:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.834 14:03:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:42.834 14:03:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:42.834 14:03:22 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:42.834 14:03:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:42.834 14:03:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:42.834 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:20:42.834 14:03:22 -- nvmf/common.sh@470 -- # nvmfpid=77310 00:20:42.834 14:03:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:42.834 14:03:22 -- nvmf/common.sh@471 -- # waitforlisten 77310 00:20:42.834 14:03:22 -- common/autotest_common.sh@817 -- # '[' -z 77310 ']' 00:20:42.834 14:03:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.834 14:03:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:42.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:42.834 14:03:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.834 14:03:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:42.834 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:20:42.834 [2024-04-26 14:03:22.489233] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:20:42.834 [2024-04-26 14:03:22.489768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.093 [2024-04-26 14:03:22.665886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.353 [2024-04-26 14:03:22.909606] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.353 [2024-04-26 14:03:22.909656] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.353 [2024-04-26 14:03:22.909673] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.353 [2024-04-26 14:03:22.909685] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.353 [2024-04-26 14:03:22.909698] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:43.353 [2024-04-26 14:03:22.909896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.353 [2024-04-26 14:03:22.910032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.353 [2024-04-26 14:03:22.910775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.353 [2024-04-26 14:03:22.910793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.920 14:03:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.920 14:03:23 -- common/autotest_common.sh@850 -- # return 0 00:20:43.920 14:03:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:43.920 14:03:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:43.920 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:20:43.920 14:03:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.920 14:03:23 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:44.179 [2024-04-26 14:03:23.600755] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.179 14:03:23 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.438 14:03:23 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:44.438 14:03:23 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.696 14:03:24 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:44.696 14:03:24 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:44.953 14:03:24 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:44.953 14:03:24 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.520 14:03:24 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:45.520 14:03:24 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:45.520 14:03:25 -- 
target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:45.780 14:03:25 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:45.780 14:03:25 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:46.039 14:03:25 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:46.039 14:03:25 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:46.607 14:03:25 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:46.607 14:03:25 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:46.607 14:03:26 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:46.866 14:03:26 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:46.866 14:03:26 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:47.123 14:03:26 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:47.123 14:03:26 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:47.381 14:03:26 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:47.639 [2024-04-26 14:03:27.078039] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.639 14:03:27 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:47.639 14:03:27 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:47.898 14:03:27 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:48.156 14:03:27 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:48.156 14:03:27 -- common/autotest_common.sh@1184 -- # local i=0 00:20:48.156 14:03:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:48.156 14:03:27 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:20:48.156 14:03:27 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:20:48.156 14:03:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:50.057 14:03:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:50.057 14:03:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:50.057 14:03:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:50.057 14:03:29 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:20:50.057 14:03:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:50.057 14:03:29 -- common/autotest_common.sh@1194 -- # return 0 00:20:50.057 14:03:29 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:50.057 [global] 00:20:50.057 thread=1 00:20:50.057 invalidate=1 00:20:50.057 rw=write 00:20:50.057 time_based=1 00:20:50.057 runtime=1 00:20:50.057 ioengine=libaio 00:20:50.057 direct=1 00:20:50.057 bs=4096 00:20:50.057 iodepth=1 
00:20:50.057 norandommap=0 00:20:50.057 numjobs=1 00:20:50.057 00:20:50.057 verify_dump=1 00:20:50.057 verify_backlog=512 00:20:50.057 verify_state_save=0 00:20:50.057 do_verify=1 00:20:50.057 verify=crc32c-intel 00:20:50.057 [job0] 00:20:50.057 filename=/dev/nvme0n1 00:20:50.057 [job1] 00:20:50.057 filename=/dev/nvme0n2 00:20:50.057 [job2] 00:20:50.057 filename=/dev/nvme0n3 00:20:50.057 [job3] 00:20:50.057 filename=/dev/nvme0n4 00:20:50.350 Could not set queue depth (nvme0n1) 00:20:50.350 Could not set queue depth (nvme0n2) 00:20:50.350 Could not set queue depth (nvme0n3) 00:20:50.350 Could not set queue depth (nvme0n4) 00:20:50.350 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.350 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.350 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.350 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:50.350 fio-3.35 00:20:50.350 Starting 4 threads 00:20:51.736 00:20:51.736 job0: (groupid=0, jobs=1): err= 0: pid=77601: Fri Apr 26 14:03:31 2024 00:20:51.736 read: IOPS=1455, BW=5822KiB/s (5962kB/s)(5828KiB/1001msec) 00:20:51.736 slat (nsec): min=8321, max=86806, avg=23362.07, stdev=12015.23 00:20:51.736 clat (usec): min=177, max=1070, avg=375.33, stdev=76.75 00:20:51.736 lat (usec): min=203, max=1103, avg=398.69, stdev=85.29 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:20:51.736 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 375], 60.00th=[ 408], 00:20:51.736 | 70.00th=[ 429], 80.00th=[ 445], 90.00th=[ 465], 95.00th=[ 478], 00:20:51.736 | 99.00th=[ 586], 99.50th=[ 660], 99.90th=[ 758], 99.95th=[ 1074], 00:20:51.736 | 99.99th=[ 1074] 00:20:51.736 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:20:51.736 slat (usec): min=14, max=124, avg=35.58, stdev= 9.79 00:20:51.736 clat (usec): min=122, max=1103, avg=232.51, stdev=41.80 00:20:51.736 lat (usec): min=143, max=1134, avg=268.09, stdev=41.66 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 141], 5.00th=[ 190], 10.00th=[ 202], 20.00th=[ 215], 00:20:51.736 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:20:51.736 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 273], 00:20:51.736 | 99.00th=[ 302], 99.50th=[ 383], 99.90th=[ 947], 99.95th=[ 1106], 00:20:51.736 | 99.99th=[ 1106] 00:20:51.736 bw ( KiB/s): min= 8192, max= 8192, per=21.35%, avg=8192.00, stdev= 0.00, samples=1 00:20:51.736 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:51.736 lat (usec) : 250=41.63%, 500=57.27%, 750=0.90%, 1000=0.13% 00:20:51.736 lat (msec) : 2=0.07% 00:20:51.736 cpu : usr=1.50%, sys=7.00%, ctx=2993, majf=0, minf=9 00:20:51.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 issued rwts: total=1457,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.736 job1: (groupid=0, jobs=1): err= 0: pid=77602: Fri Apr 26 14:03:31 2024 00:20:51.736 read: IOPS=2757, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:20:51.736 slat (nsec): min=8187, max=35418, 
avg=10306.83, stdev=3287.18 00:20:51.736 clat (usec): min=137, max=1660, avg=180.40, stdev=52.12 00:20:51.736 lat (usec): min=146, max=1669, avg=190.70, stdev=52.42 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:20:51.736 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 172], 00:20:51.736 | 70.00th=[ 178], 80.00th=[ 194], 90.00th=[ 231], 95.00th=[ 253], 00:20:51.736 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 529], 99.95th=[ 668], 00:20:51.736 | 99.99th=[ 1663] 00:20:51.736 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:20:51.736 slat (usec): min=12, max=128, avg=16.40, stdev= 7.82 00:20:51.736 clat (usec): min=102, max=855, avg=135.78, stdev=25.57 00:20:51.736 lat (usec): min=114, max=869, avg=152.18, stdev=27.49 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 110], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 120], 00:20:51.736 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:20:51.736 | 70.00th=[ 139], 80.00th=[ 147], 90.00th=[ 165], 95.00th=[ 184], 00:20:51.736 | 99.00th=[ 206], 99.50th=[ 223], 99.90th=[ 262], 99.95th=[ 441], 00:20:51.736 | 99.99th=[ 857] 00:20:51.736 bw ( KiB/s): min=12288, max=12288, per=32.02%, avg=12288.00, stdev= 0.00, samples=1 00:20:51.736 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:51.736 lat (usec) : 250=97.39%, 500=2.54%, 750=0.03%, 1000=0.02% 00:20:51.736 lat (msec) : 2=0.02% 00:20:51.736 cpu : usr=1.80%, sys=5.60%, ctx=5833, majf=0, minf=7 00:20:51.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 issued rwts: total=2760,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.736 job2: (groupid=0, jobs=1): err= 0: pid=77603: Fri Apr 26 14:03:31 2024 00:20:51.736 read: IOPS=1672, BW=6689KiB/s (6850kB/s)(6696KiB/1001msec) 00:20:51.736 slat (nsec): min=8372, max=99549, avg=14883.40, stdev=7988.04 00:20:51.736 clat (usec): min=146, max=449, avg=270.74, stdev=38.69 00:20:51.736 lat (usec): min=155, max=480, avg=285.62, stdev=42.60 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 186], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 247], 00:20:51.736 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 285], 00:20:51.736 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 322], 00:20:51.736 | 99.00th=[ 351], 99.50th=[ 383], 99.90th=[ 441], 99.95th=[ 449], 00:20:51.736 | 99.99th=[ 449] 00:20:51.736 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:51.736 slat (usec): min=13, max=118, avg=31.44, stdev=10.21 00:20:51.736 clat (usec): min=109, max=2431, avg=220.25, stdev=74.19 00:20:51.736 lat (usec): min=124, max=2469, avg=251.69, stdev=74.68 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 145], 20.00th=[ 165], 00:20:51.736 | 30.00th=[ 208], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 233], 00:20:51.736 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 277], 00:20:51.736 | 99.00th=[ 412], 99.50th=[ 469], 99.90th=[ 889], 99.95th=[ 930], 00:20:51.736 | 99.99th=[ 2442] 00:20:51.736 bw ( KiB/s): min= 8192, max= 8192, per=21.35%, avg=8192.00, stdev= 0.00, samples=1 00:20:51.736 iops : min= 2048, max= 2048, avg=2048.00, stdev= 
0.00, samples=1 00:20:51.736 lat (usec) : 250=54.62%, 500=45.14%, 750=0.13%, 1000=0.08% 00:20:51.736 lat (msec) : 4=0.03% 00:20:51.736 cpu : usr=1.60%, sys=6.60%, ctx=3733, majf=0, minf=15 00:20:51.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 issued rwts: total=1674,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.736 job3: (groupid=0, jobs=1): err= 0: pid=77604: Fri Apr 26 14:03:31 2024 00:20:51.736 read: IOPS=2560, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1000msec) 00:20:51.736 slat (nsec): min=8231, max=46991, avg=12104.93, stdev=4410.74 00:20:51.736 clat (usec): min=152, max=2742, avg=185.38, stdev=52.64 00:20:51.736 lat (usec): min=160, max=2756, avg=197.49, stdev=53.09 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:20:51.736 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:20:51.736 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:20:51.736 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 247], 99.95th=[ 322], 00:20:51.736 | 99.99th=[ 2737] 00:20:51.736 write: IOPS=2947, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1000msec); 0 zone resets 00:20:51.736 slat (usec): min=12, max=107, avg=19.43, stdev= 9.34 00:20:51.736 clat (usec): min=110, max=1998, avg=145.92, stdev=36.85 00:20:51.736 lat (usec): min=126, max=2019, avg=165.35, stdev=38.99 00:20:51.736 clat percentiles (usec): 00:20:51.736 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 131], 20.00th=[ 135], 00:20:51.736 | 30.00th=[ 137], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 147], 00:20:51.736 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 172], 00:20:51.736 | 99.00th=[ 188], 99.50th=[ 192], 99.90th=[ 219], 99.95th=[ 251], 00:20:51.736 | 99.99th=[ 1991] 00:20:51.736 bw ( KiB/s): min=12288, max=12288, per=32.02%, avg=12288.00, stdev= 0.00, samples=1 00:20:51.736 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:51.736 lat (usec) : 250=99.93%, 500=0.04% 00:20:51.736 lat (msec) : 2=0.02%, 4=0.02% 00:20:51.736 cpu : usr=1.80%, sys=6.40%, ctx=5507, majf=0, minf=4 00:20:51.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:51.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:51.736 issued rwts: total=2560,2947,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:51.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:51.736 00:20:51.736 Run status group 0 (all jobs): 00:20:51.736 READ: bw=33.0MiB/s (34.6MB/s), 5822KiB/s-10.8MiB/s (5962kB/s-11.3MB/s), io=33.0MiB (34.6MB), run=1000-1001msec 00:20:51.736 WRITE: bw=37.5MiB/s (39.3MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=37.5MiB (39.3MB), run=1000-1001msec 00:20:51.736 00:20:51.736 Disk stats (read/write): 00:20:51.736 nvme0n1: ios=1188/1536, merge=0/0, ticks=460/383, in_queue=843, util=88.28% 00:20:51.736 nvme0n2: ios=2466/2560, merge=0/0, ticks=513/374, in_queue=887, util=93.53% 00:20:51.736 nvme0n3: ios=1591/1694, merge=0/0, ticks=539/394, in_queue=933, util=94.23% 00:20:51.736 nvme0n4: ios=2197/2560, merge=0/0, ticks=418/396, in_queue=814, util=89.75% 00:20:51.736 14:03:31 -- target/fio.sh@51 -- # 
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:51.736 [global] 00:20:51.736 thread=1 00:20:51.736 invalidate=1 00:20:51.736 rw=randwrite 00:20:51.736 time_based=1 00:20:51.736 runtime=1 00:20:51.736 ioengine=libaio 00:20:51.736 direct=1 00:20:51.736 bs=4096 00:20:51.736 iodepth=1 00:20:51.736 norandommap=0 00:20:51.736 numjobs=1 00:20:51.736 00:20:51.736 verify_dump=1 00:20:51.736 verify_backlog=512 00:20:51.736 verify_state_save=0 00:20:51.736 do_verify=1 00:20:51.736 verify=crc32c-intel 00:20:51.736 [job0] 00:20:51.736 filename=/dev/nvme0n1 00:20:51.736 [job1] 00:20:51.736 filename=/dev/nvme0n2 00:20:51.736 [job2] 00:20:51.736 filename=/dev/nvme0n3 00:20:51.736 [job3] 00:20:51.736 filename=/dev/nvme0n4 00:20:51.736 Could not set queue depth (nvme0n1) 00:20:51.736 Could not set queue depth (nvme0n2) 00:20:51.736 Could not set queue depth (nvme0n3) 00:20:51.736 Could not set queue depth (nvme0n4) 00:20:51.736 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.736 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.736 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.736 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:51.736 fio-3.35 00:20:51.736 Starting 4 threads 00:20:53.114 00:20:53.114 job0: (groupid=0, jobs=1): err= 0: pid=77664: Fri Apr 26 14:03:32 2024 00:20:53.114 read: IOPS=2984, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1001msec) 00:20:53.114 slat (nsec): min=8262, max=29828, avg=9682.58, stdev=2228.56 00:20:53.114 clat (usec): min=139, max=1031, avg=170.59, stdev=30.91 00:20:53.114 lat (usec): min=147, max=1049, avg=180.28, stdev=31.51 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 157], 00:20:53.114 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:20:53.114 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 196], 00:20:53.114 | 99.00th=[ 265], 99.50th=[ 334], 99.90th=[ 693], 99.95th=[ 725], 00:20:53.114 | 99.99th=[ 1029] 00:20:53.114 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:20:53.114 slat (usec): min=10, max=135, avg=15.88, stdev= 6.76 00:20:53.114 clat (usec): min=104, max=1882, avg=132.33, stdev=35.11 00:20:53.114 lat (usec): min=118, max=1896, avg=148.21, stdev=36.22 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 119], 20.00th=[ 122], 00:20:53.114 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:20:53.114 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:20:53.114 | 99.00th=[ 172], 99.50th=[ 180], 99.90th=[ 318], 99.95th=[ 388], 00:20:53.114 | 99.99th=[ 1876] 00:20:53.114 bw ( KiB/s): min=12288, max=12288, per=30.60%, avg=12288.00, stdev= 0.00, samples=1 00:20:53.114 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:53.114 lat (usec) : 250=99.31%, 500=0.61%, 750=0.05% 00:20:53.114 lat (msec) : 2=0.03% 00:20:53.114 cpu : usr=0.90%, sys=6.50%, ctx=6059, majf=0, minf=13 00:20:53.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 issued rwts: 
total=2987,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.114 job1: (groupid=0, jobs=1): err= 0: pid=77665: Fri Apr 26 14:03:32 2024 00:20:53.114 read: IOPS=1700, BW=6801KiB/s (6964kB/s)(6808KiB/1001msec) 00:20:53.114 slat (nsec): min=6367, max=53676, avg=8314.33, stdev=2980.59 00:20:53.114 clat (usec): min=167, max=966, avg=287.84, stdev=34.10 00:20:53.114 lat (usec): min=174, max=975, avg=296.16, stdev=34.69 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 202], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 269], 00:20:53.114 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:20:53.114 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:20:53.114 | 99.00th=[ 400], 99.50th=[ 424], 99.90th=[ 578], 99.95th=[ 963], 00:20:53.114 | 99.99th=[ 963] 00:20:53.114 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:53.114 slat (usec): min=7, max=220, avg=16.43, stdev=12.96 00:20:53.114 clat (usec): min=4, max=748, avg=223.91, stdev=36.93 00:20:53.114 lat (usec): min=130, max=760, avg=240.35, stdev=37.12 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 129], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 204], 00:20:53.114 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:20:53.114 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 258], 95.00th=[ 273], 00:20:53.114 | 99.00th=[ 351], 99.50th=[ 367], 99.90th=[ 537], 99.95th=[ 594], 00:20:53.114 | 99.99th=[ 750] 00:20:53.114 bw ( KiB/s): min= 8192, max= 8192, per=20.40%, avg=8192.00, stdev= 0.00, samples=1 00:20:53.114 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:53.114 lat (usec) : 10=0.03%, 250=49.81%, 500=49.97%, 750=0.16%, 1000=0.03% 00:20:53.114 cpu : usr=1.00%, sys=3.80%, ctx=3768, majf=0, minf=11 00:20:53.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 issued rwts: total=1702,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.114 job2: (groupid=0, jobs=1): err= 0: pid=77666: Fri Apr 26 14:03:32 2024 00:20:53.114 read: IOPS=1700, BW=6801KiB/s (6964kB/s)(6808KiB/1001msec) 00:20:53.114 slat (nsec): min=7030, max=30003, avg=9380.21, stdev=2983.14 00:20:53.114 clat (usec): min=181, max=959, avg=286.50, stdev=33.69 00:20:53.114 lat (usec): min=195, max=966, avg=295.88, stdev=33.62 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 210], 5.00th=[ 245], 10.00th=[ 255], 20.00th=[ 265], 00:20:53.114 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:20:53.114 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 318], 95.00th=[ 330], 00:20:53.114 | 99.00th=[ 388], 99.50th=[ 404], 99.90th=[ 586], 99.95th=[ 963], 00:20:53.114 | 99.99th=[ 963] 00:20:53.114 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:20:53.114 slat (usec): min=7, max=133, avg=16.84, stdev=11.15 00:20:53.114 clat (usec): min=110, max=806, avg=223.60, stdev=35.81 00:20:53.114 lat (usec): min=138, max=821, avg=240.44, stdev=35.10 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 139], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 204], 00:20:53.114 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:20:53.114 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 
260], 95.00th=[ 281], 00:20:53.114 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 408], 99.95th=[ 478], 00:20:53.114 | 99.99th=[ 807] 00:20:53.114 bw ( KiB/s): min= 8192, max= 8192, per=20.40%, avg=8192.00, stdev= 0.00, samples=1 00:20:53.114 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:20:53.114 lat (usec) : 250=49.89%, 500=50.00%, 750=0.05%, 1000=0.05% 00:20:53.114 cpu : usr=1.00%, sys=4.10%, ctx=3769, majf=0, minf=10 00:20:53.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.114 issued rwts: total=1702,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.114 job3: (groupid=0, jobs=1): err= 0: pid=77667: Fri Apr 26 14:03:32 2024 00:20:53.114 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:20:53.114 slat (nsec): min=8247, max=39996, avg=10024.03, stdev=2810.06 00:20:53.114 clat (usec): min=154, max=507, avg=188.83, stdev=17.46 00:20:53.114 lat (usec): min=163, max=520, avg=198.85, stdev=17.78 00:20:53.114 clat percentiles (usec): 00:20:53.114 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:20:53.114 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:20:53.114 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 217], 00:20:53.114 | 99.00th=[ 237], 99.50th=[ 249], 99.90th=[ 388], 99.95th=[ 506], 00:20:53.114 | 99.99th=[ 506] 00:20:53.114 write: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:20:53.114 slat (usec): min=12, max=113, avg=16.56, stdev= 6.74 00:20:53.114 clat (usec): min=118, max=5058, avg=151.84, stdev=92.90 00:20:53.114 lat (usec): min=133, max=5074, avg=168.40, stdev=93.23 00:20:53.114 clat percentiles (usec): 00:20:53.115 | 1.00th=[ 127], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:20:53.115 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:20:53.115 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 176], 00:20:53.115 | 99.00th=[ 196], 99.50th=[ 210], 99.90th=[ 343], 99.95th=[ 465], 00:20:53.115 | 99.99th=[ 5080] 00:20:53.115 bw ( KiB/s): min=12288, max=12288, per=30.60%, avg=12288.00, stdev= 0.00, samples=1 00:20:53.115 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:20:53.115 lat (usec) : 250=99.67%, 500=0.28%, 750=0.04% 00:20:53.115 lat (msec) : 10=0.02% 00:20:53.115 cpu : usr=1.20%, sys=5.70%, ctx=5441, majf=0, minf=13 00:20:53.115 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:53.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:53.115 issued rwts: total=2560,2881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:53.115 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:53.115 00:20:53.115 Run status group 0 (all jobs): 00:20:53.115 READ: bw=34.9MiB/s (36.6MB/s), 6801KiB/s-11.7MiB/s (6964kB/s-12.2MB/s), io=35.0MiB (36.7MB), run=1001-1001msec 00:20:53.115 WRITE: bw=39.2MiB/s (41.1MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.3MiB (41.2MB), run=1001-1001msec 00:20:53.115 00:20:53.115 Disk stats (read/write): 00:20:53.115 nvme0n1: ios=2610/2684, merge=0/0, ticks=466/363, in_queue=829, util=88.26% 00:20:53.115 nvme0n2: ios=1585/1705, merge=0/0, ticks=467/376, in_queue=843, util=89.79% 
00:20:53.115 nvme0n3: ios=1557/1705, merge=0/0, ticks=455/380, in_queue=835, util=89.60% 00:20:53.115 nvme0n4: ios=2151/2560, merge=0/0, ticks=411/407, in_queue=818, util=89.86% 00:20:53.115 14:03:32 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:53.115 [global] 00:20:53.115 thread=1 00:20:53.115 invalidate=1 00:20:53.115 rw=write 00:20:53.115 time_based=1 00:20:53.115 runtime=1 00:20:53.115 ioengine=libaio 00:20:53.115 direct=1 00:20:53.115 bs=4096 00:20:53.115 iodepth=128 00:20:53.115 norandommap=0 00:20:53.115 numjobs=1 00:20:53.115 00:20:53.115 verify_dump=1 00:20:53.115 verify_backlog=512 00:20:53.115 verify_state_save=0 00:20:53.115 do_verify=1 00:20:53.115 verify=crc32c-intel 00:20:53.115 [job0] 00:20:53.115 filename=/dev/nvme0n1 00:20:53.115 [job1] 00:20:53.115 filename=/dev/nvme0n2 00:20:53.115 [job2] 00:20:53.115 filename=/dev/nvme0n3 00:20:53.115 [job3] 00:20:53.115 filename=/dev/nvme0n4 00:20:53.115 Could not set queue depth (nvme0n1) 00:20:53.115 Could not set queue depth (nvme0n2) 00:20:53.115 Could not set queue depth (nvme0n3) 00:20:53.115 Could not set queue depth (nvme0n4) 00:20:53.115 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.115 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.115 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.115 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:53.115 fio-3.35 00:20:53.115 Starting 4 threads 00:20:54.496 00:20:54.496 job0: (groupid=0, jobs=1): err= 0: pid=77721: Fri Apr 26 14:03:33 2024 00:20:54.496 read: IOPS=2081, BW=8327KiB/s (8527kB/s)(8360KiB/1004msec) 00:20:54.496 slat (usec): min=7, max=8599, avg=200.40, stdev=856.49 00:20:54.496 clat (usec): min=2747, max=47488, avg=25728.98, stdev=5492.49 00:20:54.496 lat (usec): min=6506, max=47517, avg=25929.38, stdev=5466.85 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[11338], 5.00th=[19006], 10.00th=[19792], 20.00th=[21627], 00:20:54.496 | 30.00th=[22676], 40.00th=[23462], 50.00th=[25035], 60.00th=[26608], 00:20:54.496 | 70.00th=[28181], 80.00th=[30016], 90.00th=[32375], 95.00th=[35390], 00:20:54.496 | 99.00th=[40109], 99.50th=[43779], 99.90th=[47449], 99.95th=[47449], 00:20:54.496 | 99.99th=[47449] 00:20:54.496 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:20:54.496 slat (usec): min=10, max=14257, avg=212.93, stdev=968.87 00:20:54.496 clat (usec): min=13170, max=61578, avg=28176.41, stdev=11056.45 00:20:54.496 lat (usec): min=14516, max=61616, avg=28389.34, stdev=11117.30 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[15270], 5.00th=[16712], 10.00th=[17695], 20.00th=[19006], 00:20:54.496 | 30.00th=[21103], 40.00th=[22414], 50.00th=[23987], 60.00th=[27395], 00:20:54.496 | 70.00th=[32113], 80.00th=[35914], 90.00th=[44303], 95.00th=[49021], 00:20:54.496 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:20:54.496 | 99.99th=[61604] 00:20:54.496 bw ( KiB/s): min= 8744, max=11048, per=19.10%, avg=9896.00, stdev=1629.17, samples=2 00:20:54.496 iops : min= 2186, max= 2762, avg=2474.00, stdev=407.29, samples=2 00:20:54.496 lat (msec) : 4=0.02%, 10=0.22%, 20=17.72%, 50=79.33%, 100=2.71% 00:20:54.496 cpu : usr=2.79%, sys=11.07%, ctx=614, majf=0, minf=8 00:20:54.496 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:20:54.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.496 issued rwts: total=2090,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.496 job1: (groupid=0, jobs=1): err= 0: pid=77722: Fri Apr 26 14:03:33 2024 00:20:54.496 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:20:54.496 slat (usec): min=7, max=6177, avg=123.81, stdev=517.57 00:20:54.496 clat (usec): min=2024, max=43136, avg=15931.46, stdev=6841.85 00:20:54.496 lat (usec): min=2609, max=43162, avg=16055.27, stdev=6901.58 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[ 4686], 5.00th=[10028], 10.00th=[10683], 20.00th=[11600], 00:20:54.496 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13173], 00:20:54.496 | 70.00th=[14091], 80.00th=[22938], 90.00th=[27395], 95.00th=[29492], 00:20:54.496 | 99.00th=[36963], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:20:54.496 | 99.99th=[43254] 00:20:54.496 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:20:54.496 slat (usec): min=19, max=7325, avg=142.13, stdev=557.19 00:20:54.496 clat (usec): min=7204, max=59382, avg=19220.69, stdev=12691.50 00:20:54.496 lat (usec): min=7239, max=59445, avg=19362.82, stdev=12786.80 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11338], 00:20:54.496 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12780], 60.00th=[15139], 00:20:54.496 | 70.00th=[20317], 80.00th=[22938], 90.00th=[45876], 95.00th=[50594], 00:20:54.496 | 99.00th=[56886], 99.50th=[57410], 99.90th=[58983], 99.95th=[59507], 00:20:54.496 | 99.99th=[59507] 00:20:54.496 bw ( KiB/s): min= 8208, max=20480, per=27.68%, avg=14344.00, stdev=8677.61, samples=2 00:20:54.496 iops : min= 2052, max= 5120, avg=3586.00, stdev=2169.40, samples=2 00:20:54.496 lat (msec) : 4=0.42%, 10=5.57%, 20=64.91%, 50=26.37%, 100=2.74% 00:20:54.496 cpu : usr=4.79%, sys=15.47%, ctx=653, majf=0, minf=11 00:20:54.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:20:54.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.496 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.496 job2: (groupid=0, jobs=1): err= 0: pid=77723: Fri Apr 26 14:03:33 2024 00:20:54.496 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:20:54.496 slat (usec): min=7, max=17181, avg=171.79, stdev=882.71 00:20:54.496 clat (usec): min=9261, max=65328, avg=23591.43, stdev=12470.79 00:20:54.496 lat (usec): min=9292, max=65362, avg=23763.22, stdev=12535.91 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[10290], 5.00th=[10814], 10.00th=[10945], 20.00th=[11731], 00:20:54.496 | 30.00th=[13042], 40.00th=[19268], 50.00th=[23462], 60.00th=[25560], 00:20:54.496 | 70.00th=[26608], 80.00th=[29492], 90.00th=[42206], 95.00th=[54264], 00:20:54.496 | 99.00th=[64750], 99.50th=[65274], 99.90th=[65274], 99.95th=[65274], 00:20:54.496 | 99.99th=[65274] 00:20:54.496 write: IOPS=3269, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1003msec); 0 zone resets 00:20:54.496 slat (usec): min=14, max=17269, avg=131.08, stdev=512.81 00:20:54.496 clat (usec): 
min=2228, max=28896, avg=16533.69, stdev=5488.93 00:20:54.496 lat (usec): min=2266, max=37569, avg=16664.77, stdev=5535.82 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[ 8586], 5.00th=[10421], 10.00th=[10945], 20.00th=[11207], 00:20:54.496 | 30.00th=[11338], 40.00th=[12649], 50.00th=[15795], 60.00th=[18744], 00:20:54.496 | 70.00th=[20579], 80.00th=[22152], 90.00th=[24249], 95.00th=[25297], 00:20:54.496 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:20:54.496 | 99.99th=[28967] 00:20:54.496 bw ( KiB/s): min= 8792, max=16416, per=24.32%, avg=12604.00, stdev=5390.98, samples=2 00:20:54.496 iops : min= 2198, max= 4104, avg=3151.00, stdev=1347.75, samples=2 00:20:54.496 lat (msec) : 4=0.08%, 10=1.15%, 20=52.64%, 50=43.11%, 100=3.02% 00:20:54.496 cpu : usr=4.69%, sys=13.67%, ctx=826, majf=0, minf=9 00:20:54.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:20:54.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.496 issued rwts: total=3072,3279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.496 job3: (groupid=0, jobs=1): err= 0: pid=77724: Fri Apr 26 14:03:33 2024 00:20:54.496 read: IOPS=3074, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:20:54.496 slat (usec): min=7, max=8276, avg=133.28, stdev=593.93 00:20:54.496 clat (usec): min=524, max=36291, avg=17691.69, stdev=6264.52 00:20:54.496 lat (usec): min=3390, max=36319, avg=17824.98, stdev=6321.27 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[10814], 5.00th=[11600], 10.00th=[12518], 20.00th=[13435], 00:20:54.496 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[15008], 00:20:54.496 | 70.00th=[19530], 80.00th=[24249], 90.00th=[28181], 95.00th=[30802], 00:20:54.496 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[35914], 00:20:54.496 | 99.99th=[36439] 00:20:54.496 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:20:54.496 slat (usec): min=15, max=8361, avg=152.43, stdev=602.66 00:20:54.496 clat (usec): min=3575, max=59404, avg=19960.06, stdev=12058.16 00:20:54.496 lat (usec): min=3610, max=59436, avg=20112.49, stdev=12147.74 00:20:54.496 clat percentiles (usec): 00:20:54.496 | 1.00th=[ 5014], 5.00th=[10552], 10.00th=[11076], 20.00th=[12780], 00:20:54.496 | 30.00th=[13435], 40.00th=[13960], 50.00th=[14353], 60.00th=[15401], 00:20:54.496 | 70.00th=[20317], 80.00th=[23462], 90.00th=[45351], 95.00th=[50070], 00:20:54.496 | 99.00th=[55837], 99.50th=[56361], 99.90th=[58459], 99.95th=[59507], 00:20:54.496 | 99.99th=[59507] 00:20:54.496 bw ( KiB/s): min= 8175, max= 8175, per=15.78%, avg=8175.00, stdev= 0.00, samples=1 00:20:54.496 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:20:54.496 lat (usec) : 750=0.02% 00:20:54.496 lat (msec) : 4=0.24%, 10=1.17%, 20=68.09%, 50=27.71%, 100=2.78% 00:20:54.496 cpu : usr=4.70%, sys=14.80%, ctx=550, majf=0, minf=13 00:20:54.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:54.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:54.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:54.496 issued rwts: total=3078,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:54.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:54.496 00:20:54.496 Run status group 0 (all jobs): 
00:20:54.496 READ: bw=46.0MiB/s (48.2MB/s), 8327KiB/s-13.9MiB/s (8527kB/s-14.6MB/s), io=46.2MiB (48.4MB), run=1001-1004msec 00:20:54.496 WRITE: bw=50.6MiB/s (53.1MB/s), 9.96MiB/s-14.0MiB/s (10.4MB/s-14.7MB/s), io=50.8MiB (53.3MB), run=1001-1004msec 00:20:54.496 00:20:54.496 Disk stats (read/write): 00:20:54.496 nvme0n1: ios=1875/2048, merge=0/0, ticks=11143/13206, in_queue=24349, util=87.86% 00:20:54.496 nvme0n2: ios=2648/3072, merge=0/0, ticks=14266/17504, in_queue=31770, util=87.46% 00:20:54.496 nvme0n3: ios=2566/3072, merge=0/0, ticks=13260/10329, in_queue=23589, util=88.74% 00:20:54.496 nvme0n4: ios=2560/2758, merge=0/0, ticks=14311/17183, in_queue=31494, util=88.36% 00:20:54.496 14:03:33 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:54.496 [global] 00:20:54.496 thread=1 00:20:54.496 invalidate=1 00:20:54.496 rw=randwrite 00:20:54.496 time_based=1 00:20:54.496 runtime=1 00:20:54.496 ioengine=libaio 00:20:54.497 direct=1 00:20:54.497 bs=4096 00:20:54.497 iodepth=128 00:20:54.497 norandommap=0 00:20:54.497 numjobs=1 00:20:54.497 00:20:54.497 verify_dump=1 00:20:54.497 verify_backlog=512 00:20:54.497 verify_state_save=0 00:20:54.497 do_verify=1 00:20:54.497 verify=crc32c-intel 00:20:54.497 [job0] 00:20:54.497 filename=/dev/nvme0n1 00:20:54.497 [job1] 00:20:54.497 filename=/dev/nvme0n2 00:20:54.497 [job2] 00:20:54.497 filename=/dev/nvme0n3 00:20:54.497 [job3] 00:20:54.497 filename=/dev/nvme0n4 00:20:54.497 Could not set queue depth (nvme0n1) 00:20:54.497 Could not set queue depth (nvme0n2) 00:20:54.497 Could not set queue depth (nvme0n3) 00:20:54.497 Could not set queue depth (nvme0n4) 00:20:54.755 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.755 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.755 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.755 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:54.756 fio-3.35 00:20:54.756 Starting 4 threads 00:20:56.136 00:20:56.136 job0: (groupid=0, jobs=1): err= 0: pid=77784: Fri Apr 26 14:03:35 2024 00:20:56.136 read: IOPS=2154, BW=8616KiB/s (8823kB/s)(8668KiB/1006msec) 00:20:56.136 slat (usec): min=18, max=8741, avg=138.44, stdev=671.75 00:20:56.136 clat (usec): min=4948, max=62784, avg=15997.18, stdev=5460.13 00:20:56.136 lat (usec): min=4967, max=62806, avg=16135.62, stdev=5557.33 00:20:56.136 clat percentiles (usec): 00:20:56.136 | 1.00th=[ 7373], 5.00th=[12125], 10.00th=[12911], 20.00th=[14353], 00:20:56.136 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:20:56.136 | 70.00th=[15795], 80.00th=[16581], 90.00th=[18220], 95.00th=[19530], 00:20:56.136 | 99.00th=[51119], 99.50th=[58459], 99.90th=[62653], 99.95th=[62653], 00:20:56.136 | 99.99th=[62653] 00:20:56.136 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:20:56.136 slat (usec): min=8, max=27192, avg=261.32, stdev=1360.70 00:20:56.136 clat (msec): min=9, max=129, avg=35.51, stdev=27.84 00:20:56.136 lat (msec): min=9, max=129, avg=35.77, stdev=28.00 00:20:56.136 clat percentiles (msec): 00:20:56.136 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:20:56.136 | 30.00th=[ 19], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:20:56.136 | 70.00th=[ 36], 80.00th=[ 53], 90.00th=[ 
73], 95.00th=[ 105], 00:20:56.137 | 99.00th=[ 127], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 130], 00:20:56.137 | 99.99th=[ 130] 00:20:56.137 bw ( KiB/s): min= 8208, max=12168, per=16.58%, avg=10188.00, stdev=2800.14, samples=2 00:20:56.137 iops : min= 2052, max= 3042, avg=2547.00, stdev=700.04, samples=2 00:20:56.137 lat (msec) : 10=1.54%, 20=59.59%, 50=26.93%, 100=8.44%, 250=3.49% 00:20:56.137 cpu : usr=3.08%, sys=10.75%, ctx=320, majf=0, minf=13 00:20:56.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:20:56.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.137 issued rwts: total=2167,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.137 job1: (groupid=0, jobs=1): err= 0: pid=77785: Fri Apr 26 14:03:35 2024 00:20:56.137 read: IOPS=3417, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1003msec) 00:20:56.137 slat (usec): min=9, max=12191, avg=119.03, stdev=685.26 00:20:56.137 clat (usec): min=1166, max=36107, avg=15430.89, stdev=4870.77 00:20:56.137 lat (usec): min=1188, max=36129, avg=15549.92, stdev=4909.46 00:20:56.137 clat percentiles (usec): 00:20:56.137 | 1.00th=[ 3326], 5.00th=[10814], 10.00th=[11731], 20.00th=[12780], 00:20:56.137 | 30.00th=[13304], 40.00th=[13698], 50.00th=[14353], 60.00th=[15008], 00:20:56.137 | 70.00th=[15533], 80.00th=[17433], 90.00th=[21103], 95.00th=[26084], 00:20:56.137 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:20:56.137 | 99.99th=[35914] 00:20:56.137 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:20:56.137 slat (usec): min=10, max=12704, avg=152.12, stdev=677.54 00:20:56.137 clat (usec): min=5154, max=61350, avg=20686.51, stdev=12709.34 00:20:56.137 lat (usec): min=5193, max=61373, avg=20838.63, stdev=12791.65 00:20:56.137 clat percentiles (usec): 00:20:56.137 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[10683], 20.00th=[11731], 00:20:56.137 | 30.00th=[11994], 40.00th=[12256], 50.00th=[14484], 60.00th=[17695], 00:20:56.137 | 70.00th=[24249], 80.00th=[25297], 90.00th=[42206], 95.00th=[51643], 00:20:56.137 | 99.00th=[57934], 99.50th=[58459], 99.90th=[61080], 99.95th=[61080], 00:20:56.137 | 99.99th=[61604] 00:20:56.137 bw ( KiB/s): min=13515, max=15184, per=23.35%, avg=14349.50, stdev=1180.16, samples=2 00:20:56.137 iops : min= 3378, max= 3796, avg=3587.00, stdev=295.57, samples=2 00:20:56.137 lat (msec) : 2=0.19%, 4=0.57%, 10=2.61%, 20=71.91%, 50=21.75% 00:20:56.137 lat (msec) : 100=2.98% 00:20:56.137 cpu : usr=4.89%, sys=13.67%, ctx=416, majf=0, minf=13 00:20:56.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:20:56.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.137 issued rwts: total=3428,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.137 job2: (groupid=0, jobs=1): err= 0: pid=77787: Fri Apr 26 14:03:35 2024 00:20:56.137 read: IOPS=4440, BW=17.3MiB/s (18.2MB/s)(17.4MiB/1002msec) 00:20:56.137 slat (usec): min=16, max=3333, avg=105.86, stdev=429.20 00:20:56.137 clat (usec): min=394, max=18773, avg=14159.35, stdev=1538.31 00:20:56.137 lat (usec): min=3559, max=19716, avg=14265.21, stdev=1500.63 00:20:56.137 clat percentiles (usec): 00:20:56.137 | 1.00th=[ 7767], 
5.00th=[11994], 10.00th=[12518], 20.00th=[13304], 00:20:56.137 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14484], 00:20:56.137 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15664], 95.00th=[16319], 00:20:56.137 | 99.00th=[16581], 99.50th=[17433], 99.90th=[18744], 99.95th=[18744], 00:20:56.137 | 99.99th=[18744] 00:20:56.137 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:20:56.137 slat (usec): min=23, max=2905, avg=102.33, stdev=358.88 00:20:56.137 clat (usec): min=10933, max=18599, avg=13798.34, stdev=1283.20 00:20:56.137 lat (usec): min=10968, max=18634, avg=13900.67, stdev=1292.53 00:20:56.137 clat percentiles (usec): 00:20:56.137 | 1.00th=[11600], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:20:56.137 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14222], 00:20:56.137 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15533], 95.00th=[15664], 00:20:56.137 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:20:56.137 | 99.99th=[18482] 00:20:56.137 bw ( KiB/s): min=17384, max=19480, per=30.00%, avg=18432.00, stdev=1482.10, samples=2 00:20:56.137 iops : min= 4346, max= 4870, avg=4608.00, stdev=370.52, samples=2 00:20:56.137 lat (usec) : 500=0.01% 00:20:56.137 lat (msec) : 4=0.17%, 10=0.54%, 20=99.28% 00:20:56.137 cpu : usr=5.99%, sys=19.88%, ctx=600, majf=0, minf=8 00:20:56.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:20:56.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.137 issued rwts: total=4449,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.137 job3: (groupid=0, jobs=1): err= 0: pid=77788: Fri Apr 26 14:03:35 2024 00:20:56.137 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:20:56.137 slat (usec): min=9, max=13920, avg=108.86, stdev=692.28 00:20:56.137 clat (usec): min=5614, max=28660, avg=14623.52, stdev=3406.40 00:20:56.137 lat (usec): min=5633, max=28684, avg=14732.37, stdev=3440.27 00:20:56.137 clat percentiles (usec): 00:20:56.137 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[10945], 20.00th=[12387], 00:20:56.137 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[14484], 00:20:56.137 | 70.00th=[15008], 80.00th=[16581], 90.00th=[19268], 95.00th=[22414], 00:20:56.137 | 99.00th=[25297], 99.50th=[26346], 99.90th=[28705], 99.95th=[28705], 00:20:56.137 | 99.99th=[28705] 00:20:56.137 write: IOPS=4681, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1004msec); 0 zone resets 00:20:56.137 slat (usec): min=10, max=8171, avg=94.10, stdev=454.92 00:20:56.137 clat (usec): min=2159, max=28618, avg=12716.98, stdev=2378.38 00:20:56.137 lat (usec): min=5038, max=28632, avg=12811.09, stdev=2421.02 00:20:56.137 clat percentiles (usec): 00:20:56.137 | 1.00th=[ 5538], 5.00th=[ 7242], 10.00th=[ 9372], 20.00th=[11600], 00:20:56.137 | 30.00th=[12256], 40.00th=[12780], 50.00th=[13042], 60.00th=[13698], 00:20:56.137 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14746], 95.00th=[15533], 00:20:56.137 | 99.00th=[16057], 99.50th=[18482], 99.90th=[25297], 99.95th=[25560], 00:20:56.137 | 99.99th=[28705] 00:20:56.137 bw ( KiB/s): min=16440, max=20398, per=29.98%, avg=18419.00, stdev=2798.73, samples=2 00:20:56.137 iops : min= 4110, max= 5099, avg=4604.50, stdev=699.33, samples=2 00:20:56.137 lat (msec) : 4=0.01%, 10=6.74%, 20=88.92%, 50=4.33% 00:20:56.137 cpu : usr=6.58%, sys=17.45%, ctx=547, 
majf=0, minf=7 00:20:56.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:20:56.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:56.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:56.137 issued rwts: total=4608,4700,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:56.137 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:56.137 00:20:56.137 Run status group 0 (all jobs): 00:20:56.137 READ: bw=56.9MiB/s (59.7MB/s), 8616KiB/s-17.9MiB/s (8823kB/s-18.8MB/s), io=57.2MiB (60.0MB), run=1002-1006msec 00:20:56.137 WRITE: bw=60.0MiB/s (62.9MB/s), 9.94MiB/s-18.3MiB/s (10.4MB/s-19.2MB/s), io=60.4MiB (63.3MB), run=1002-1006msec 00:20:56.137 00:20:56.137 Disk stats (read/write): 00:20:56.137 nvme0n1: ios=1586/1935, merge=0/0, ticks=12819/37061, in_queue=49880, util=88.67% 00:20:56.137 nvme0n2: ios=2616/3072, merge=0/0, ticks=38390/64260, in_queue=102650, util=89.48% 00:20:56.137 nvme0n3: ios=3873/4096, merge=0/0, ticks=12338/11213, in_queue=23551, util=89.92% 00:20:56.137 nvme0n4: ios=4039/4096, merge=0/0, ticks=53018/47923, in_queue=100941, util=89.76% 00:20:56.137 14:03:35 -- target/fio.sh@55 -- # sync 00:20:56.137 14:03:35 -- target/fio.sh@59 -- # fio_pid=77804 00:20:56.137 14:03:35 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:56.137 14:03:35 -- target/fio.sh@61 -- # sleep 3 00:20:56.137 [global] 00:20:56.137 thread=1 00:20:56.137 invalidate=1 00:20:56.137 rw=read 00:20:56.137 time_based=1 00:20:56.137 runtime=10 00:20:56.137 ioengine=libaio 00:20:56.137 direct=1 00:20:56.137 bs=4096 00:20:56.137 iodepth=1 00:20:56.137 norandommap=1 00:20:56.137 numjobs=1 00:20:56.137 00:20:56.137 [job0] 00:20:56.137 filename=/dev/nvme0n1 00:20:56.137 [job1] 00:20:56.137 filename=/dev/nvme0n2 00:20:56.137 [job2] 00:20:56.137 filename=/dev/nvme0n3 00:20:56.137 [job3] 00:20:56.137 filename=/dev/nvme0n4 00:20:56.137 Could not set queue depth (nvme0n1) 00:20:56.137 Could not set queue depth (nvme0n2) 00:20:56.137 Could not set queue depth (nvme0n3) 00:20:56.137 Could not set queue depth (nvme0n4) 00:20:56.137 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:56.137 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:56.137 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:56.137 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:56.137 fio-3.35 00:20:56.137 Starting 4 threads 00:20:59.433 14:03:38 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:59.433 fio: pid=77848, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:59.433 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=62484480, buflen=4096 00:20:59.433 14:03:38 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:59.433 fio: pid=77847, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:59.433 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=58093568, buflen=4096 00:20:59.433 14:03:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:59.433 14:03:38 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete 
Malloc0 00:20:59.433 fio: pid=77845, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:59.433 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=64577536, buflen=4096 00:20:59.692 14:03:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:59.692 14:03:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:59.951 fio: pid=77846, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:20:59.951 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8531968, buflen=4096 00:20:59.951 00:20:59.951 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77845: Fri Apr 26 14:03:39 2024 00:20:59.951 read: IOPS=4919, BW=19.2MiB/s (20.1MB/s)(61.6MiB/3205msec) 00:20:59.951 slat (usec): min=5, max=15385, avg=12.46, stdev=205.49 00:20:59.951 clat (usec): min=113, max=3662, avg=190.06, stdev=64.77 00:20:59.951 lat (usec): min=130, max=15576, avg=202.51, stdev=216.70 00:20:59.951 clat percentiles (usec): 00:20:59.951 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:20:59.951 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:20:59.951 | 70.00th=[ 184], 80.00th=[ 249], 90.00th=[ 273], 95.00th=[ 285], 00:20:59.951 | 99.00th=[ 347], 99.50th=[ 375], 99.90th=[ 445], 99.95th=[ 644], 00:20:59.951 | 99.99th=[ 2507] 00:20:59.951 bw ( KiB/s): min=14728, max=23440, per=27.78%, avg=19748.50, stdev=4006.95, samples=6 00:20:59.951 iops : min= 3682, max= 5860, avg=4937.00, stdev=1001.92, samples=6 00:20:59.951 lat (usec) : 250=80.10%, 500=19.82%, 750=0.03%, 1000=0.01% 00:20:59.951 lat (msec) : 2=0.02%, 4=0.01% 00:20:59.951 cpu : usr=0.87%, sys=4.03%, ctx=15789, majf=0, minf=1 00:20:59.951 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.951 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.951 issued rwts: total=15767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.951 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.951 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77846: Fri Apr 26 14:03:39 2024 00:20:59.951 read: IOPS=5154, BW=20.1MiB/s (21.1MB/s)(72.1MiB/3583msec) 00:20:59.951 slat (usec): min=7, max=12810, avg=11.77, stdev=170.41 00:20:59.951 clat (usec): min=3, max=180117, avg=181.45, stdev=1324.64 00:20:59.951 lat (usec): min=129, max=180127, avg=193.23, stdev=1335.72 00:20:59.951 clat percentiles (usec): 00:20:59.951 | 1.00th=[ 133], 5.00th=[ 145], 10.00th=[ 153], 20.00th=[ 161], 00:20:59.951 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:20:59.952 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:20:59.952 | 99.00th=[ 217], 99.50th=[ 229], 99.90th=[ 510], 99.95th=[ 734], 00:20:59.952 | 99.99th=[ 2507] 00:20:59.952 bw ( KiB/s): min=20694, max=22208, per=30.41%, avg=21616.33, stdev=552.83, samples=6 00:20:59.952 iops : min= 5173, max= 5552, avg=5404.00, stdev=138.37, samples=6 00:20:59.952 lat (usec) : 4=0.01%, 250=99.67%, 500=0.21%, 750=0.06%, 1000=0.02% 00:20:59.952 lat (msec) : 2=0.02%, 4=0.01%, 250=0.01% 00:20:59.952 cpu : usr=0.98%, sys=4.08%, ctx=18481, majf=0, minf=1 00:20:59.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.952 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.952 issued rwts: total=18468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.952 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77847: Fri Apr 26 14:03:39 2024 00:20:59.952 read: IOPS=4688, BW=18.3MiB/s (19.2MB/s)(55.4MiB/3025msec) 00:20:59.952 slat (usec): min=6, max=12772, avg=10.52, stdev=122.96 00:20:59.952 clat (usec): min=132, max=3470, avg=201.93, stdev=58.33 00:20:59.952 lat (usec): min=150, max=12952, avg=212.45, stdev=135.95 00:20:59.952 clat percentiles (usec): 00:20:59.952 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:20:59.952 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 186], 00:20:59.952 | 70.00th=[ 206], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 285], 00:20:59.952 | 99.00th=[ 347], 99.50th=[ 367], 99.90th=[ 437], 99.95th=[ 553], 00:20:59.952 | 99.99th=[ 1942] 00:20:59.952 bw ( KiB/s): min=14720, max=21960, per=27.72%, avg=19704.40, stdev=3018.42, samples=5 00:20:59.952 iops : min= 3680, max= 5490, avg=4926.00, stdev=754.64, samples=5 00:20:59.952 lat (usec) : 250=77.79%, 500=22.14%, 750=0.02%, 1000=0.02% 00:20:59.952 lat (msec) : 2=0.01%, 4=0.01% 00:20:59.952 cpu : usr=0.60%, sys=3.97%, ctx=14199, majf=0, minf=1 00:20:59.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.952 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.952 issued rwts: total=14184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.952 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=77848: Fri Apr 26 14:03:39 2024 00:20:59.952 read: IOPS=5406, BW=21.1MiB/s (22.1MB/s)(59.6MiB/2822msec) 00:20:59.952 slat (nsec): min=8116, max=76321, avg=9591.30, stdev=2827.22 00:20:59.952 clat (usec): min=131, max=1880, avg=174.72, stdev=25.57 00:20:59.952 lat (usec): min=141, max=1898, avg=184.32, stdev=26.12 00:20:59.952 clat percentiles (usec): 00:20:59.952 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:20:59.952 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:20:59.952 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 192], 95.00th=[ 200], 00:20:59.952 | 99.00th=[ 219], 99.50th=[ 237], 99.90th=[ 416], 99.95th=[ 553], 00:20:59.952 | 99.99th=[ 1029] 00:20:59.952 bw ( KiB/s): min=20936, max=22464, per=30.34%, avg=21565.60, stdev=591.86, samples=5 00:20:59.952 iops : min= 5234, max= 5616, avg=5391.40, stdev=147.97, samples=5 00:20:59.952 lat (usec) : 250=99.55%, 500=0.39%, 750=0.03%, 1000=0.02% 00:20:59.952 lat (msec) : 2=0.01% 00:20:59.952 cpu : usr=0.92%, sys=4.40%, ctx=15257, majf=0, minf=2 00:20:59.952 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:59.952 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.952 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:59.952 issued rwts: total=15256,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:59.952 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:59.952 00:20:59.952 Run status group 0 (all jobs): 00:20:59.952 READ: bw=69.4MiB/s (72.8MB/s), 18.3MiB/s-21.1MiB/s (19.2MB/s-22.1MB/s), 
io=249MiB (261MB), run=2822-3583msec 00:20:59.952 00:20:59.952 Disk stats (read/write): 00:20:59.952 nvme0n1: ios=15248/0, merge=0/0, ticks=2924/0, in_queue=2924, util=94.70% 00:20:59.952 nvme0n2: ios=17291/0, merge=0/0, ticks=3010/0, in_queue=3010, util=95.75% 00:20:59.952 nvme0n3: ios=13696/0, merge=0/0, ticks=2754/0, in_queue=2754, util=96.47% 00:20:59.952 nvme0n4: ios=14144/0, merge=0/0, ticks=2494/0, in_queue=2494, util=96.47% 00:20:59.952 14:03:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:59.952 14:03:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:21:00.521 14:03:39 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:00.521 14:03:39 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:21:00.780 14:03:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:00.780 14:03:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:21:01.348 14:03:40 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:01.348 14:03:40 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:21:01.607 14:03:41 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:21:01.607 14:03:41 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:21:01.867 14:03:41 -- target/fio.sh@69 -- # fio_status=0 00:21:01.867 14:03:41 -- target/fio.sh@70 -- # wait 77804 00:21:01.867 14:03:41 -- target/fio.sh@70 -- # fio_status=4 00:21:01.867 14:03:41 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:01.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:01.867 14:03:41 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:01.867 14:03:41 -- common/autotest_common.sh@1205 -- # local i=0 00:21:01.867 14:03:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:21:01.867 14:03:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:02.125 14:03:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:21:02.125 14:03:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:02.125 nvmf hotplug test: fio failed as expected 00:21:02.125 14:03:41 -- common/autotest_common.sh@1217 -- # return 0 00:21:02.125 14:03:41 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:21:02.125 14:03:41 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:21:02.125 14:03:41 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.125 14:03:41 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:21:02.125 14:03:41 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:21:02.125 14:03:41 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:21:02.125 14:03:41 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:21:02.125 14:03:41 -- target/fio.sh@91 -- # nvmftestfini 00:21:02.125 14:03:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:02.125 14:03:41 -- nvmf/common.sh@117 -- # sync 00:21:02.125 14:03:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.125 14:03:41 -- nvmf/common.sh@120 -- # set +e 00:21:02.125 14:03:41 -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:21:02.125 14:03:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.125 rmmod nvme_tcp 00:21:02.125 rmmod nvme_fabrics 00:21:02.384 rmmod nvme_keyring 00:21:02.384 14:03:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.384 14:03:41 -- nvmf/common.sh@124 -- # set -e 00:21:02.384 14:03:41 -- nvmf/common.sh@125 -- # return 0 00:21:02.384 14:03:41 -- nvmf/common.sh@478 -- # '[' -n 77310 ']' 00:21:02.384 14:03:41 -- nvmf/common.sh@479 -- # killprocess 77310 00:21:02.384 14:03:41 -- common/autotest_common.sh@936 -- # '[' -z 77310 ']' 00:21:02.384 14:03:41 -- common/autotest_common.sh@940 -- # kill -0 77310 00:21:02.384 14:03:41 -- common/autotest_common.sh@941 -- # uname 00:21:02.384 14:03:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:02.384 14:03:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77310 00:21:02.384 killing process with pid 77310 00:21:02.384 14:03:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:02.384 14:03:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:02.384 14:03:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77310' 00:21:02.384 14:03:41 -- common/autotest_common.sh@955 -- # kill 77310 00:21:02.384 14:03:41 -- common/autotest_common.sh@960 -- # wait 77310 00:21:03.783 14:03:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:03.783 14:03:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:03.783 14:03:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:03.783 14:03:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.783 14:03:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.783 14:03:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.783 14:03:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.783 14:03:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.783 14:03:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:03.783 00:21:03.783 real 0m21.452s 00:21:03.783 user 1m19.016s 00:21:03.783 sys 0m9.608s 00:21:03.783 14:03:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:03.783 ************************************ 00:21:03.783 END TEST nvmf_fio_target 00:21:03.783 ************************************ 00:21:03.783 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:21:03.783 14:03:43 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:03.783 14:03:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:03.783 14:03:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.783 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:21:03.783 ************************************ 00:21:03.783 START TEST nvmf_bdevio 00:21:03.783 ************************************ 00:21:03.783 14:03:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:21:04.043 * Looking for test storage... 
00:21:04.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:04.043 14:03:43 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:04.043 14:03:43 -- nvmf/common.sh@7 -- # uname -s 00:21:04.043 14:03:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.043 14:03:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.043 14:03:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.043 14:03:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.043 14:03:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.043 14:03:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.043 14:03:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.043 14:03:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.043 14:03:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.043 14:03:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.043 14:03:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:21:04.043 14:03:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:21:04.043 14:03:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.043 14:03:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.043 14:03:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:04.043 14:03:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.043 14:03:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:04.043 14:03:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.043 14:03:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.043 14:03:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.043 14:03:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.044 14:03:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.044 14:03:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.044 14:03:43 -- paths/export.sh@5 -- # export PATH 00:21:04.044 14:03:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.044 14:03:43 -- nvmf/common.sh@47 -- # : 0 00:21:04.044 14:03:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:04.044 14:03:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:04.044 14:03:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.044 14:03:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.044 14:03:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.044 14:03:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:04.044 14:03:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:04.044 14:03:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:04.044 14:03:43 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:04.044 14:03:43 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:04.044 14:03:43 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:04.044 14:03:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:04.044 14:03:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.044 14:03:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:04.044 14:03:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:04.044 14:03:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:04.044 14:03:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.044 14:03:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:04.044 14:03:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.044 14:03:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:04.044 14:03:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:04.044 14:03:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:04.044 14:03:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:04.044 14:03:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:04.044 14:03:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:04.044 14:03:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.044 14:03:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.044 14:03:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:04.044 14:03:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:04.044 14:03:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:04.044 14:03:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:04.044 14:03:43 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:04.044 14:03:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.044 14:03:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:04.044 14:03:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:04.044 14:03:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:04.044 14:03:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:04.044 14:03:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:04.044 14:03:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:04.044 Cannot find device "nvmf_tgt_br" 00:21:04.044 14:03:43 -- nvmf/common.sh@155 -- # true 00:21:04.044 14:03:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:04.044 Cannot find device "nvmf_tgt_br2" 00:21:04.044 14:03:43 -- nvmf/common.sh@156 -- # true 00:21:04.044 14:03:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:04.044 14:03:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:04.044 Cannot find device "nvmf_tgt_br" 00:21:04.044 14:03:43 -- nvmf/common.sh@158 -- # true 00:21:04.044 14:03:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:04.044 Cannot find device "nvmf_tgt_br2" 00:21:04.044 14:03:43 -- nvmf/common.sh@159 -- # true 00:21:04.044 14:03:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:04.044 14:03:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:04.044 14:03:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:04.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.044 14:03:43 -- nvmf/common.sh@162 -- # true 00:21:04.044 14:03:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:04.044 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:04.044 14:03:43 -- nvmf/common.sh@163 -- # true 00:21:04.044 14:03:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:04.044 14:03:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:04.044 14:03:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:04.302 14:03:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:04.302 14:03:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:04.302 14:03:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:04.302 14:03:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:04.302 14:03:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:04.302 14:03:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:04.302 14:03:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:04.302 14:03:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:04.302 14:03:43 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:04.302 14:03:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:04.302 14:03:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:04.302 14:03:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:04.302 14:03:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:04.302 14:03:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:04.302 14:03:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:04.302 14:03:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:04.302 14:03:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:04.302 14:03:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:04.302 14:03:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:04.302 14:03:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:04.302 14:03:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:04.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:21:04.302 00:21:04.302 --- 10.0.0.2 ping statistics --- 00:21:04.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.302 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:04.302 14:03:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:04.302 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:04.302 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.126 ms 00:21:04.302 00:21:04.302 --- 10.0.0.3 ping statistics --- 00:21:04.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.302 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:21:04.302 14:03:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:04.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:04.302 00:21:04.302 --- 10.0.0.1 ping statistics --- 00:21:04.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.302 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:04.302 14:03:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.302 14:03:43 -- nvmf/common.sh@422 -- # return 0 00:21:04.302 14:03:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:04.302 14:03:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.302 14:03:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:04.302 14:03:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:04.302 14:03:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.302 14:03:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:04.302 14:03:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:04.302 14:03:43 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:04.302 14:03:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:04.302 14:03:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:04.302 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:21:04.562 14:03:43 -- nvmf/common.sh@470 -- # nvmfpid=78200 00:21:04.562 14:03:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:21:04.562 14:03:43 -- nvmf/common.sh@471 -- # waitforlisten 78200 00:21:04.562 14:03:43 -- common/autotest_common.sh@817 -- # '[' -z 78200 ']' 00:21:04.562 14:03:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.562 14:03:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:04.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
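The namespace and bridge plumbing exercised above (nvmf_veth_init) reduces to a small, reproducible topology: one initiator-side veth endpoint on the host, one target-side endpoint inside the nvmf_tgt_ns_spdk namespace, both joined through a bridge. A condensed root-shell sketch using the same names and addresses as the log (the second target interface, 10.0.0.3, is omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host-side check that the target address answers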
00:21:04.562 14:03:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.562 14:03:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:04.562 14:03:43 -- common/autotest_common.sh@10 -- # set +x 00:21:04.562 [2024-04-26 14:03:44.085696] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:04.562 [2024-04-26 14:03:44.085853] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.830 [2024-04-26 14:03:44.262356] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:05.088 [2024-04-26 14:03:44.522431] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.088 [2024-04-26 14:03:44.522508] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.088 [2024-04-26 14:03:44.522543] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.088 [2024-04-26 14:03:44.522555] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.088 [2024-04-26 14:03:44.522569] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.088 [2024-04-26 14:03:44.523086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:05.088 [2024-04-26 14:03:44.523904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:05.088 [2024-04-26 14:03:44.524194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:05.088 [2024-04-26 14:03:44.524231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.346 14:03:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:05.346 14:03:44 -- common/autotest_common.sh@850 -- # return 0 00:21:05.346 14:03:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:05.346 14:03:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:05.346 14:03:44 -- common/autotest_common.sh@10 -- # set +x 00:21:05.604 14:03:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.604 14:03:45 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:05.604 14:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.604 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:21:05.604 [2024-04-26 14:03:45.040954] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.604 14:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.604 14:03:45 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:05.604 14:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.604 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:21:05.604 Malloc0 00:21:05.604 14:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.604 14:03:45 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.604 14:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.604 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:21:05.604 14:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.604 14:03:45 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:05.604 14:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.604 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:21:05.604 14:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.604 14:03:45 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.604 14:03:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:05.604 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:21:05.604 [2024-04-26 14:03:45.188724] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.604 14:03:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:05.604 14:03:45 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:21:05.604 14:03:45 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:05.604 14:03:45 -- nvmf/common.sh@521 -- # config=() 00:21:05.604 14:03:45 -- nvmf/common.sh@521 -- # local subsystem config 00:21:05.604 14:03:45 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:05.604 14:03:45 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:05.604 { 00:21:05.604 "params": { 00:21:05.604 "name": "Nvme$subsystem", 00:21:05.604 "trtype": "$TEST_TRANSPORT", 00:21:05.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:05.604 "adrfam": "ipv4", 00:21:05.604 "trsvcid": "$NVMF_PORT", 00:21:05.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:05.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:05.604 "hdgst": ${hdgst:-false}, 00:21:05.604 "ddgst": ${ddgst:-false} 00:21:05.604 }, 00:21:05.604 "method": "bdev_nvme_attach_controller" 00:21:05.604 } 00:21:05.604 EOF 00:21:05.604 )") 00:21:05.604 14:03:45 -- nvmf/common.sh@543 -- # cat 00:21:05.604 14:03:45 -- nvmf/common.sh@545 -- # jq . 00:21:05.604 14:03:45 -- nvmf/common.sh@546 -- # IFS=, 00:21:05.604 14:03:45 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:05.604 "params": { 00:21:05.604 "name": "Nvme1", 00:21:05.604 "trtype": "tcp", 00:21:05.604 "traddr": "10.0.0.2", 00:21:05.604 "adrfam": "ipv4", 00:21:05.604 "trsvcid": "4420", 00:21:05.604 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.604 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.604 "hdgst": false, 00:21:05.604 "ddgst": false 00:21:05.604 }, 00:21:05.604 "method": "bdev_nvme_attach_controller" 00:21:05.604 }' 00:21:05.862 [2024-04-26 14:03:45.287660] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
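The rpc_cmd calls above provision the whole TCP target that bdevio will exercise: a transport, a malloc bdev, a subsystem, a namespace, and a listener. The same sequence run directly through scripts/rpc.py would look like the sketch below; arguments are copied from the log, and rpc.py talks to the target over the default /var/tmp/spdk.sock Unix socket, which is why the harness can issue these calls from the host shell even though the target runs inside the network namespace.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192            # flags copied verbatim from the test
    $RPC bdev_malloc_create 64 512 -b Malloc0               # 64 MB malloc bdev with 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420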
00:21:05.862 [2024-04-26 14:03:45.287777] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78254 ] 00:21:05.862 [2024-04-26 14:03:45.461397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:06.119 [2024-04-26 14:03:45.722575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.119 [2024-04-26 14:03:45.722623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.119 [2024-04-26 14:03:45.722642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.686 I/O targets: 00:21:06.686 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:06.686 00:21:06.686 00:21:06.686 CUnit - A unit testing framework for C - Version 2.1-3 00:21:06.686 http://cunit.sourceforge.net/ 00:21:06.686 00:21:06.686 00:21:06.686 Suite: bdevio tests on: Nvme1n1 00:21:06.686 Test: blockdev write read block ...passed 00:21:06.686 Test: blockdev write zeroes read block ...passed 00:21:06.686 Test: blockdev write zeroes read no split ...passed 00:21:06.944 Test: blockdev write zeroes read split ...passed 00:21:06.944 Test: blockdev write zeroes read split partial ...passed 00:21:06.944 Test: blockdev reset ...[2024-04-26 14:03:46.415186] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:06.944 [2024-04-26 14:03:46.415339] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:21:06.944 [2024-04-26 14:03:46.436719] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:06.944 passed 00:21:06.944 Test: blockdev write read 8 blocks ...passed 00:21:06.944 Test: blockdev write read size > 128k ...passed 00:21:06.944 Test: blockdev write read invalid size ...passed 00:21:06.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:06.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:06.944 Test: blockdev write read max offset ...passed 00:21:06.944 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:06.944 Test: blockdev writev readv 8 blocks ...passed 00:21:06.944 Test: blockdev writev readv 30 x 1block ...passed 00:21:06.944 Test: blockdev writev readv block ...passed 00:21:06.944 Test: blockdev writev readv size > 128k ...passed 00:21:06.944 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:06.944 Test: blockdev comparev and writev ...[2024-04-26 14:03:46.612871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.612934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.612959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.612972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.613445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.613472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.613503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.613516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.613927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.613953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.613973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.613985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.614391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.614415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:06.944 [2024-04-26 14:03:46.614439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:06.944 [2024-04-26 14:03:46.614453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:07.203 passed 00:21:07.203 Test: blockdev nvme passthru rw ...passed 00:21:07.203 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:03:46.697714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.203 [2024-04-26 14:03:46.697791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:07.203 [2024-04-26 14:03:46.697945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.203 [2024-04-26 14:03:46.697966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:07.203 [2024-04-26 14:03:46.698131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.203 [2024-04-26 14:03:46.698166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:07.203 [2024-04-26 14:03:46.698289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:07.203 [2024-04-26 14:03:46.698312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:07.203 passed 00:21:07.203 Test: blockdev nvme admin passthru ...passed 00:21:07.203 Test: blockdev copy ...passed 00:21:07.203 00:21:07.203 Run Summary: Type Total Ran Passed Failed Inactive 00:21:07.203 suites 1 1 n/a 0 0 00:21:07.203 tests 23 23 23 0 0 00:21:07.203 asserts 152 152 152 0 n/a 00:21:07.203 00:21:07.203 Elapsed time = 1.154 seconds 00:21:08.575 14:03:48 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:08.575 14:03:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.575 14:03:48 -- common/autotest_common.sh@10 -- # set +x 00:21:08.575 14:03:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.575 14:03:48 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:08.575 14:03:48 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:08.575 14:03:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:08.575 14:03:48 -- nvmf/common.sh@117 -- # sync 00:21:08.575 14:03:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:08.575 14:03:48 -- nvmf/common.sh@120 -- # set +e 00:21:08.575 14:03:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:08.575 14:03:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:08.575 rmmod nvme_tcp 00:21:08.575 rmmod nvme_fabrics 00:21:08.575 rmmod nvme_keyring 00:21:08.575 14:03:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:08.575 14:03:48 -- nvmf/common.sh@124 -- # set -e 00:21:08.575 14:03:48 -- nvmf/common.sh@125 -- # return 0 00:21:08.575 14:03:48 -- nvmf/common.sh@478 -- # '[' -n 78200 ']' 00:21:08.575 14:03:48 -- nvmf/common.sh@479 -- # killprocess 78200 00:21:08.575 14:03:48 -- common/autotest_common.sh@936 -- # '[' -z 78200 ']' 00:21:08.575 14:03:48 -- common/autotest_common.sh@940 -- # kill -0 78200 00:21:08.575 14:03:48 -- common/autotest_common.sh@941 -- # uname 00:21:08.575 14:03:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:08.575 14:03:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78200 00:21:08.833 14:03:48 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:21:08.833 14:03:48 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:08.833 killing process with pid 78200 00:21:08.833 14:03:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78200' 00:21:08.833 14:03:48 -- common/autotest_common.sh@955 -- # kill 78200 00:21:08.833 14:03:48 -- common/autotest_common.sh@960 -- # wait 78200 00:21:10.208 14:03:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:10.208 14:03:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:10.208 14:03:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:10.208 14:03:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.208 14:03:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.208 14:03:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.208 14:03:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.208 14:03:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.208 14:03:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:10.208 ************************************ 00:21:10.208 END TEST nvmf_bdevio 00:21:10.208 ************************************ 00:21:10.208 00:21:10.208 real 0m6.493s 00:21:10.208 user 0m25.611s 00:21:10.208 sys 0m1.275s 00:21:10.208 14:03:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:10.208 14:03:49 -- common/autotest_common.sh@10 -- # set +x 00:21:10.466 14:03:49 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:21:10.466 14:03:49 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.466 14:03:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:21:10.466 14:03:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:10.466 14:03:49 -- common/autotest_common.sh@10 -- # set +x 00:21:10.466 ************************************ 00:21:10.466 START TEST nvmf_bdevio_no_huge 00:21:10.466 ************************************ 00:21:10.466 14:03:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:10.725 * Looking for test storage... 
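nvmftestfini, which closes out each suite above, always walks the same path: unload the kernel initiator modules, kill the nvmf_tgt process it started, and tear down the per-test network namespace. A hedged standalone mirror of those steps; nvmfpid stands for the target PID the harness recorded at startup, the 20-attempt modprobe retry loop is omitted, and the namespace removal is an approximation of remove_spdk_ns:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # target must be a child of this shell for wait to apply
    ip netns delete nvmf_tgt_ns_spdk      # takes the target-side veth endpoints with it
    ip -4 addr flush nvmf_init_if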
00:21:10.725 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:10.725 14:03:50 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:10.725 14:03:50 -- nvmf/common.sh@7 -- # uname -s 00:21:10.725 14:03:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.725 14:03:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.725 14:03:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.725 14:03:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.725 14:03:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.725 14:03:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.725 14:03:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.725 14:03:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.725 14:03:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.725 14:03:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.725 14:03:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:21:10.725 14:03:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:21:10.725 14:03:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.725 14:03:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.725 14:03:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:10.725 14:03:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.725 14:03:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:10.725 14:03:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.725 14:03:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.725 14:03:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.725 14:03:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.725 14:03:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.725 14:03:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.725 14:03:50 -- paths/export.sh@5 -- # export PATH 00:21:10.725 14:03:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.725 14:03:50 -- nvmf/common.sh@47 -- # : 0 00:21:10.725 14:03:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.725 14:03:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.725 14:03:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.725 14:03:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.725 14:03:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.725 14:03:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.725 14:03:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.725 14:03:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.725 14:03:50 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.725 14:03:50 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.725 14:03:50 -- target/bdevio.sh@14 -- # nvmftestinit 00:21:10.725 14:03:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:10.725 14:03:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.725 14:03:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:10.725 14:03:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:10.725 14:03:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:10.725 14:03:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.725 14:03:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.725 14:03:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.725 14:03:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:10.725 14:03:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:10.725 14:03:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:10.725 14:03:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:10.725 14:03:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:10.725 14:03:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:10.725 14:03:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.725 14:03:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.725 14:03:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:10.725 14:03:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:10.725 14:03:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:10.726 14:03:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:10.726 14:03:50 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:10.726 14:03:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.726 14:03:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:10.726 14:03:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:10.726 14:03:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:10.726 14:03:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:10.726 14:03:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:10.726 14:03:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:10.726 Cannot find device "nvmf_tgt_br" 00:21:10.726 14:03:50 -- nvmf/common.sh@155 -- # true 00:21:10.726 14:03:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.726 Cannot find device "nvmf_tgt_br2" 00:21:10.726 14:03:50 -- nvmf/common.sh@156 -- # true 00:21:10.726 14:03:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:10.726 14:03:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:10.726 Cannot find device "nvmf_tgt_br" 00:21:10.726 14:03:50 -- nvmf/common.sh@158 -- # true 00:21:10.726 14:03:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:10.726 Cannot find device "nvmf_tgt_br2" 00:21:10.726 14:03:50 -- nvmf/common.sh@159 -- # true 00:21:10.726 14:03:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:10.726 14:03:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:10.726 14:03:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.726 14:03:50 -- nvmf/common.sh@162 -- # true 00:21:10.726 14:03:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.726 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.726 14:03:50 -- nvmf/common.sh@163 -- # true 00:21:10.726 14:03:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.984 14:03:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.984 14:03:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.984 14:03:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.984 14:03:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.984 14:03:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.984 14:03:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.984 14:03:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:10.984 14:03:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:10.984 14:03:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:10.984 14:03:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:10.984 14:03:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:10.984 14:03:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:10.984 14:03:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.984 14:03:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.984 14:03:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:21:10.984 14:03:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:10.984 14:03:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:10.984 14:03:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.985 14:03:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.985 14:03:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.985 14:03:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.985 14:03:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.985 14:03:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:10.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:21:10.985 00:21:10.985 --- 10.0.0.2 ping statistics --- 00:21:10.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.985 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:21:10.985 14:03:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:10.985 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:10.985 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:21:10.985 00:21:10.985 --- 10.0.0.3 ping statistics --- 00:21:10.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.985 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:21:10.985 14:03:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:10.985 00:21:10.985 --- 10.0.0.1 ping statistics --- 00:21:10.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.985 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:10.985 14:03:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.985 14:03:50 -- nvmf/common.sh@422 -- # return 0 00:21:10.985 14:03:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:10.985 14:03:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.985 14:03:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:10.985 14:03:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:10.985 14:03:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.985 14:03:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:10.985 14:03:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:10.985 14:03:50 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:10.985 14:03:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:10.985 14:03:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:10.985 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:10.985 14:03:50 -- nvmf/common.sh@470 -- # nvmfpid=78503 00:21:10.985 14:03:50 -- nvmf/common.sh@471 -- # waitforlisten 78503 00:21:10.985 14:03:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:10.985 14:03:50 -- common/autotest_common.sh@817 -- # '[' -z 78503 ']' 00:21:10.985 14:03:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.985 14:03:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
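Functionally this suite repeats nvmf_bdevio; the difference is memory backing. The target above is launched with --no-huge -s 1024, so DPDK allocates roughly 1 GB of ordinary pages instead of reserving hugepages. Side by side, with paths and core masks taken from the two app-start lines (run one or the other, not both at once):

    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    # hugepage-backed target, as used by nvmf_bdevio:
    ip netns exec nvmf_tgt_ns_spdk $NVMF_TGT -i 0 -e 0xFFFF -m 0x78
    # hugepage-free target, as used by nvmf_bdevio_no_huge (-s is the DPDK memory size in MB):
    ip netns exec nvmf_tgt_ns_spdk $NVMF_TGT -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78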
00:21:10.985 14:03:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.985 14:03:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.985 14:03:50 -- common/autotest_common.sh@10 -- # set +x 00:21:11.243 [2024-04-26 14:03:50.726030] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:11.243 [2024-04-26 14:03:50.726184] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:11.501 [2024-04-26 14:03:50.918290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.760 [2024-04-26 14:03:51.190804] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.760 [2024-04-26 14:03:51.190861] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.760 [2024-04-26 14:03:51.190875] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.760 [2024-04-26 14:03:51.190888] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.760 [2024-04-26 14:03:51.190898] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.760 [2024-04-26 14:03:51.191908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:11.760 [2024-04-26 14:03:51.192030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:11.760 [2024-04-26 14:03:51.192444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.760 [2024-04-26 14:03:51.192453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:12.019 14:03:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:12.019 14:03:51 -- common/autotest_common.sh@850 -- # return 0 00:21:12.019 14:03:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:12.019 14:03:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:12.019 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:12.019 14:03:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.019 14:03:51 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.019 14:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.019 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:12.278 [2024-04-26 14:03:51.701548] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.279 14:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.279 14:03:51 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:12.279 14:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.279 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:12.279 Malloc0 00:21:12.279 14:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.279 14:03:51 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.279 14:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.279 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:12.279 14:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.279 14:03:51 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.279 14:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.279 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:12.279 14:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.279 14:03:51 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.279 14:03:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.279 14:03:51 -- common/autotest_common.sh@10 -- # set +x 00:21:12.279 [2024-04-26 14:03:51.826029] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.279 14:03:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.279 14:03:51 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:12.279 14:03:51 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:12.279 14:03:51 -- nvmf/common.sh@521 -- # config=() 00:21:12.279 14:03:51 -- nvmf/common.sh@521 -- # local subsystem config 00:21:12.279 14:03:51 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:12.279 14:03:51 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:12.279 { 00:21:12.279 "params": { 00:21:12.279 "name": "Nvme$subsystem", 00:21:12.279 "trtype": "$TEST_TRANSPORT", 00:21:12.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:12.279 "adrfam": "ipv4", 00:21:12.279 "trsvcid": "$NVMF_PORT", 00:21:12.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:12.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:12.279 "hdgst": ${hdgst:-false}, 00:21:12.279 "ddgst": ${ddgst:-false} 00:21:12.279 }, 00:21:12.279 "method": "bdev_nvme_attach_controller" 00:21:12.279 } 00:21:12.279 EOF 00:21:12.279 )") 00:21:12.279 14:03:51 -- nvmf/common.sh@543 -- # cat 00:21:12.279 14:03:51 -- nvmf/common.sh@545 -- # jq . 00:21:12.279 14:03:51 -- nvmf/common.sh@546 -- # IFS=, 00:21:12.279 14:03:51 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:12.279 "params": { 00:21:12.279 "name": "Nvme1", 00:21:12.279 "trtype": "tcp", 00:21:12.279 "traddr": "10.0.0.2", 00:21:12.279 "adrfam": "ipv4", 00:21:12.279 "trsvcid": "4420", 00:21:12.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.279 "hdgst": false, 00:21:12.279 "ddgst": false 00:21:12.279 }, 00:21:12.279 "method": "bdev_nvme_attach_controller" 00:21:12.279 }' 00:21:12.279 [2024-04-26 14:03:51.930465] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
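bdevio itself does not talk RPC; it is handed a JSON bdev configuration on the command line. The harness builds that JSON in gen_nvmf_target_json and feeds it through process substitution, which is why the invocation above reads --json /dev/fd/62. A hand-rolled equivalent is sketched below; only the attach-controller params block is copied from the log, while the surrounding "subsystems"/"bdev"/"config" wrapper follows SPDK's usual JSON config layout and should be treated as an assumption here.

    BDEVIO=/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio
    CONFIG='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false }
        } ]
      } ]
    }'
    # the JSON is visible to bdevio as /dev/fd/<n>, exactly as in the log
    $BDEVIO --json <(printf '%s\n' "$CONFIG") --no-huge -s 1024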
00:21:12.279 [2024-04-26 14:03:51.930629] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid78557 ] 00:21:12.537 [2024-04-26 14:03:52.140321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:12.795 [2024-04-26 14:03:52.432650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.795 [2024-04-26 14:03:52.432752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.795 [2024-04-26 14:03:52.432784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.362 I/O targets: 00:21:13.362 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:13.362 00:21:13.362 00:21:13.362 CUnit - A unit testing framework for C - Version 2.1-3 00:21:13.362 http://cunit.sourceforge.net/ 00:21:13.362 00:21:13.362 00:21:13.362 Suite: bdevio tests on: Nvme1n1 00:21:13.362 Test: blockdev write read block ...passed 00:21:13.362 Test: blockdev write zeroes read block ...passed 00:21:13.621 Test: blockdev write zeroes read no split ...passed 00:21:13.621 Test: blockdev write zeroes read split ...passed 00:21:13.621 Test: blockdev write zeroes read split partial ...passed 00:21:13.621 Test: blockdev reset ...[2024-04-26 14:03:53.106773] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:13.621 [2024-04-26 14:03:53.106917] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:21:13.621 [2024-04-26 14:03:53.123582] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:13.621 passed 00:21:13.621 Test: blockdev write read 8 blocks ...passed 00:21:13.621 Test: blockdev write read size > 128k ...passed 00:21:13.621 Test: blockdev write read invalid size ...passed 00:21:13.621 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:13.621 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:13.621 Test: blockdev write read max offset ...passed 00:21:13.621 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:13.621 Test: blockdev writev readv 8 blocks ...passed 00:21:13.621 Test: blockdev writev readv 30 x 1block ...passed 00:21:13.880 Test: blockdev writev readv block ...passed 00:21:13.880 Test: blockdev writev readv size > 128k ...passed 00:21:13.880 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:13.880 Test: blockdev comparev and writev ...[2024-04-26 14:03:53.301456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.301544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.301573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.301594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.302115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.302149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.302181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.302194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.302663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.302693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.302719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.302733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.303249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.303289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.303318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:13.880 [2024-04-26 14:03:53.303331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:13.880 passed 00:21:13.880 Test: blockdev nvme passthru rw ...passed 00:21:13.880 Test: blockdev nvme passthru vendor specific ...[2024-04-26 14:03:53.386783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.880 [2024-04-26 14:03:53.386841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.387207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.880 [2024-04-26 14:03:53.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.387430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.880 [2024-04-26 14:03:53.387454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:13.880 [2024-04-26 14:03:53.387616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:13.880 [2024-04-26 14:03:53.387644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:13.880 passed 00:21:13.880 Test: blockdev nvme admin passthru ...passed 00:21:13.880 Test: blockdev copy ...passed 00:21:13.880 00:21:13.880 Run Summary: Type Total Ran Passed Failed Inactive 00:21:13.880 suites 1 1 n/a 0 0 00:21:13.880 tests 23 23 23 0 0 00:21:13.880 asserts 152 152 152 0 
n/a 00:21:13.880 00:21:13.880 Elapsed time = 1.096 seconds 00:21:14.816 14:03:54 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:14.816 14:03:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.816 14:03:54 -- common/autotest_common.sh@10 -- # set +x 00:21:14.816 14:03:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.816 14:03:54 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:14.816 14:03:54 -- target/bdevio.sh@30 -- # nvmftestfini 00:21:14.816 14:03:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:14.816 14:03:54 -- nvmf/common.sh@117 -- # sync 00:21:14.816 14:03:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:14.816 14:03:54 -- nvmf/common.sh@120 -- # set +e 00:21:14.816 14:03:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.816 14:03:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:14.816 rmmod nvme_tcp 00:21:14.816 rmmod nvme_fabrics 00:21:14.816 rmmod nvme_keyring 00:21:14.816 14:03:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.816 14:03:54 -- nvmf/common.sh@124 -- # set -e 00:21:14.816 14:03:54 -- nvmf/common.sh@125 -- # return 0 00:21:14.816 14:03:54 -- nvmf/common.sh@478 -- # '[' -n 78503 ']' 00:21:14.816 14:03:54 -- nvmf/common.sh@479 -- # killprocess 78503 00:21:14.816 14:03:54 -- common/autotest_common.sh@936 -- # '[' -z 78503 ']' 00:21:14.816 14:03:54 -- common/autotest_common.sh@940 -- # kill -0 78503 00:21:14.816 14:03:54 -- common/autotest_common.sh@941 -- # uname 00:21:14.816 14:03:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:14.816 14:03:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78503 00:21:14.816 14:03:54 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:21:14.816 14:03:54 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:21:14.816 killing process with pid 78503 00:21:14.816 14:03:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78503' 00:21:14.816 14:03:54 -- common/autotest_common.sh@955 -- # kill 78503 00:21:14.816 14:03:54 -- common/autotest_common.sh@960 -- # wait 78503 00:21:15.749 14:03:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:15.749 14:03:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:15.749 14:03:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:15.749 14:03:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.749 14:03:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.749 14:03:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.749 14:03:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.749 14:03:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.749 14:03:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:15.749 00:21:15.749 real 0m5.335s 00:21:15.749 user 0m19.705s 00:21:15.749 sys 0m1.783s 00:21:15.749 14:03:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:15.749 14:03:55 -- common/autotest_common.sh@10 -- # set +x 00:21:15.749 ************************************ 00:21:15.749 END TEST nvmf_bdevio_no_huge 00:21:15.749 ************************************ 00:21:16.006 14:03:55 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:16.006 14:03:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:16.006 14:03:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:16.006 14:03:55 -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.006 ************************************ 00:21:16.006 START TEST nvmf_tls 00:21:16.006 ************************************ 00:21:16.006 14:03:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:16.006 * Looking for test storage... 00:21:16.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:16.006 14:03:55 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.006 14:03:55 -- nvmf/common.sh@7 -- # uname -s 00:21:16.006 14:03:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.006 14:03:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.006 14:03:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.006 14:03:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.006 14:03:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.006 14:03:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.006 14:03:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.006 14:03:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.007 14:03:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.007 14:03:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.264 14:03:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:21:16.264 14:03:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:21:16.264 14:03:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.264 14:03:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.264 14:03:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.264 14:03:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.264 14:03:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.264 14:03:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.264 14:03:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.264 14:03:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.264 14:03:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.264 14:03:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.264 14:03:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.264 14:03:55 -- paths/export.sh@5 -- # export PATH 00:21:16.264 14:03:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.264 14:03:55 -- nvmf/common.sh@47 -- # : 0 00:21:16.264 14:03:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.264 14:03:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.264 14:03:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.264 14:03:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.264 14:03:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.264 14:03:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.264 14:03:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.264 14:03:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.264 14:03:55 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:16.264 14:03:55 -- target/tls.sh@62 -- # nvmftestinit 00:21:16.264 14:03:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:16.264 14:03:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.264 14:03:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:16.264 14:03:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:16.264 14:03:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:16.264 14:03:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.264 14:03:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.264 14:03:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.264 14:03:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:16.264 14:03:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:16.264 14:03:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:16.264 14:03:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:16.264 14:03:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:16.264 14:03:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:16.264 14:03:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.264 14:03:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.264 14:03:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:16.264 14:03:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:16.264 14:03:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:16.264 14:03:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.264 14:03:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.264 
14:03:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.264 14:03:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.264 14:03:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.264 14:03:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.264 14:03:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.264 14:03:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:16.264 14:03:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:16.264 Cannot find device "nvmf_tgt_br" 00:21:16.264 14:03:55 -- nvmf/common.sh@155 -- # true 00:21:16.264 14:03:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.264 Cannot find device "nvmf_tgt_br2" 00:21:16.264 14:03:55 -- nvmf/common.sh@156 -- # true 00:21:16.264 14:03:55 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:16.264 14:03:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:16.264 Cannot find device "nvmf_tgt_br" 00:21:16.264 14:03:55 -- nvmf/common.sh@158 -- # true 00:21:16.264 14:03:55 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:16.264 Cannot find device "nvmf_tgt_br2" 00:21:16.264 14:03:55 -- nvmf/common.sh@159 -- # true 00:21:16.264 14:03:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:16.264 14:03:55 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:16.264 14:03:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.264 14:03:55 -- nvmf/common.sh@162 -- # true 00:21:16.264 14:03:55 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.264 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.264 14:03:55 -- nvmf/common.sh@163 -- # true 00:21:16.264 14:03:55 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.264 14:03:55 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.264 14:03:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.264 14:03:55 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.264 14:03:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.521 14:03:55 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.521 14:03:55 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.521 14:03:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:16.521 14:03:55 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:16.521 14:03:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:16.521 14:03:55 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:16.521 14:03:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:16.521 14:03:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:16.521 14:03:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.521 14:03:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.521 14:03:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.521 14:03:56 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:16.521 14:03:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:16.521 14:03:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.521 14:03:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.521 14:03:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.521 14:03:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.521 14:03:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.521 14:03:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:16.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:21:16.521 00:21:16.521 --- 10.0.0.2 ping statistics --- 00:21:16.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.521 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:16.521 14:03:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:16.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:21:16.521 00:21:16.521 --- 10.0.0.3 ping statistics --- 00:21:16.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.521 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:21:16.521 14:03:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:16.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:21:16.521 00:21:16.521 --- 10.0.0.1 ping statistics --- 00:21:16.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.521 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:21:16.521 14:03:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.521 14:03:56 -- nvmf/common.sh@422 -- # return 0 00:21:16.521 14:03:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:16.521 14:03:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.521 14:03:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:16.521 14:03:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:16.521 14:03:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.521 14:03:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:16.521 14:03:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:16.521 14:03:56 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:16.521 14:03:56 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:16.521 14:03:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:16.521 14:03:56 -- common/autotest_common.sh@10 -- # set +x 00:21:16.521 14:03:56 -- nvmf/common.sh@470 -- # nvmfpid=78792 00:21:16.521 14:03:56 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:16.521 14:03:56 -- nvmf/common.sh@471 -- # waitforlisten 78792 00:21:16.521 14:03:56 -- common/autotest_common.sh@817 -- # '[' -z 78792 ']' 00:21:16.521 14:03:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.521 14:03:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:16.521 14:03:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:16.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.521 14:03:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:16.521 14:03:56 -- common/autotest_common.sh@10 -- # set +x 00:21:16.779 [2024-04-26 14:03:56.280298] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:16.779 [2024-04-26 14:03:56.280419] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.036 [2024-04-26 14:03:56.456803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.036 [2024-04-26 14:03:56.692612] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.036 [2024-04-26 14:03:56.692700] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.036 [2024-04-26 14:03:56.692716] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.036 [2024-04-26 14:03:56.692753] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.036 [2024-04-26 14:03:56.692783] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:17.036 [2024-04-26 14:03:56.692825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.652 14:03:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:17.652 14:03:57 -- common/autotest_common.sh@850 -- # return 0 00:21:17.652 14:03:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:17.652 14:03:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:17.652 14:03:57 -- common/autotest_common.sh@10 -- # set +x 00:21:17.652 14:03:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.652 14:03:57 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:17.652 14:03:57 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:17.911 true 00:21:17.911 14:03:57 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:17.911 14:03:57 -- target/tls.sh@73 -- # jq -r .tls_version 00:21:17.911 14:03:57 -- target/tls.sh@73 -- # version=0 00:21:17.911 14:03:57 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:17.911 14:03:57 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:18.169 14:03:57 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.169 14:03:57 -- target/tls.sh@81 -- # jq -r .tls_version 00:21:18.427 14:03:57 -- target/tls.sh@81 -- # version=13 00:21:18.427 14:03:57 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:18.427 14:03:57 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:18.685 14:03:58 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.685 14:03:58 -- target/tls.sh@89 -- # jq -r .tls_version 00:21:18.685 14:03:58 -- target/tls.sh@89 -- # version=7 00:21:18.685 14:03:58 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:18.685 14:03:58 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.685 14:03:58 -- 
target/tls.sh@96 -- # jq -r .enable_ktls 00:21:18.943 14:03:58 -- target/tls.sh@96 -- # ktls=false 00:21:18.943 14:03:58 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:18.943 14:03:58 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:19.201 14:03:58 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.201 14:03:58 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:19.459 14:03:58 -- target/tls.sh@104 -- # ktls=true 00:21:19.459 14:03:58 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:19.459 14:03:58 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:19.718 14:03:59 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:19.718 14:03:59 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:19.718 14:03:59 -- target/tls.sh@112 -- # ktls=false 00:21:19.718 14:03:59 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:19.718 14:03:59 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:19.718 14:03:59 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:19.718 14:03:59 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:19.718 14:03:59 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:19.718 14:03:59 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:19.718 14:03:59 -- nvmf/common.sh@693 -- # digest=1 00:21:19.718 14:03:59 -- nvmf/common.sh@694 -- # python - 00:21:19.977 14:03:59 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:19.977 14:03:59 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:19.977 14:03:59 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:19.977 14:03:59 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:19.977 14:03:59 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:19.977 14:03:59 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:21:19.977 14:03:59 -- nvmf/common.sh@693 -- # digest=1 00:21:19.977 14:03:59 -- nvmf/common.sh@694 -- # python - 00:21:19.977 14:03:59 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:19.977 14:03:59 -- target/tls.sh@121 -- # mktemp 00:21:19.977 14:03:59 -- target/tls.sh@121 -- # key_path=/tmp/tmp.cu7t8ZzYHU 00:21:19.977 14:03:59 -- target/tls.sh@122 -- # mktemp 00:21:19.977 14:03:59 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.NMKpduGSAp 00:21:19.977 14:03:59 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:19.977 14:03:59 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:19.977 14:03:59 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.cu7t8ZzYHU 00:21:19.977 14:03:59 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NMKpduGSAp 00:21:19.977 14:03:59 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:20.235 14:03:59 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:20.802 14:04:00 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.cu7t8ZzYHU 00:21:20.802 14:04:00 -- target/tls.sh@49 -- # local key=/tmp/tmp.cu7t8ZzYHU 00:21:20.802 14:04:00 -- 
target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:21.060 [2024-04-26 14:04:00.548780] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.060 14:04:00 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:21.318 14:04:00 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:21.318 [2024-04-26 14:04:00.980278] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:21.318 [2024-04-26 14:04:00.980528] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.575 14:04:01 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:21.575 malloc0 00:21:21.834 14:04:01 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.834 14:04:01 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cu7t8ZzYHU 00:21:22.094 [2024-04-26 14:04:01.639102] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:22.094 14:04:01 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.cu7t8ZzYHU 00:21:34.325 Initializing NVMe Controllers 00:21:34.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.325 Initialization complete. Launching workers. 
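The NVMeTLSkey-1:01:...: strings written to /tmp/tmp.cu7t8ZzYHU and /tmp/tmp.NMKpduGSAp earlier in the run are PSK interchange strings: base64 of the configured key bytes followed by their CRC-32, wrapped in an NVMeTLSkey-1:<hash>: prefix (the 01 and 02 tags correspond to the SHA-256 and SHA-384 sized variants). A minimal Python sketch of that composition, assuming the hex-looking test keys are used verbatim as ASCII bytes and that the CRC-32 is appended little-endian; the helper name below is illustrative, not the test suite's own implementation:

import base64
import struct
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    # Append a little-endian CRC-32 of the key bytes, base64 the whole blob,
    # and wrap it in the "NVMeTLSkey-1:<hash>:" / trailing ":" framing seen above.
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    return "NVMeTLSkey-1:%02d:%s:" % (hash_id, base64.b64encode(key + crc).decode())

# Mirrors the two /tmp keys prepared above (both use hash id 1); the longer
# 48-byte key generated later in the run uses hash id 2.
print(format_interchange_psk(b"00112233445566778899aabbccddeeff", 1))
print(format_interchange_psk(b"ffeeddccbbaa99887766554433221100", 1))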
00:21:34.325 ======================================================== 00:21:34.325 Latency(us) 00:21:34.325 Device Information : IOPS MiB/s Average min max 00:21:34.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10259.40 40.08 6239.50 1691.33 11594.30 00:21:34.325 ======================================================== 00:21:34.325 Total : 10259.40 40.08 6239.50 1691.33 11594.30 00:21:34.325 00:21:34.325 14:04:11 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu7t8ZzYHU 00:21:34.325 14:04:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.325 14:04:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.325 14:04:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.325 14:04:11 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cu7t8ZzYHU' 00:21:34.325 14:04:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.325 14:04:11 -- target/tls.sh@28 -- # bdevperf_pid=79145 00:21:34.325 14:04:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.325 14:04:11 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.325 14:04:11 -- target/tls.sh@31 -- # waitforlisten 79145 /var/tmp/bdevperf.sock 00:21:34.325 14:04:11 -- common/autotest_common.sh@817 -- # '[' -z 79145 ']' 00:21:34.325 14:04:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.325 14:04:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.325 14:04:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.325 14:04:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.325 14:04:11 -- common/autotest_common.sh@10 -- # set +x 00:21:34.325 [2024-04-26 14:04:12.091202] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:34.325 [2024-04-26 14:04:12.091362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79145 ] 00:21:34.325 [2024-04-26 14:04:12.267138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.325 [2024-04-26 14:04:12.511122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.325 14:04:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:34.325 14:04:12 -- common/autotest_common.sh@850 -- # return 0 00:21:34.325 14:04:12 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cu7t8ZzYHU 00:21:34.325 [2024-04-26 14:04:13.118494] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.325 [2024-04-26 14:04:13.118657] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:34.325 TLSTESTn1 00:21:34.325 14:04:13 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:34.325 Running I/O for 10 seconds... 00:21:44.310 00:21:44.310 Latency(us) 00:21:44.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.310 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.310 Verification LBA range: start 0x0 length 0x2000 00:21:44.310 TLSTESTn1 : 10.02 4292.29 16.77 0.00 0.00 29769.28 6711.52 36636.99 00:21:44.310 =================================================================================================================== 00:21:44.310 Total : 4292.29 16.77 0.00 0.00 29769.28 6711.52 36636.99 00:21:44.310 0 00:21:44.311 14:04:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.311 14:04:23 -- target/tls.sh@45 -- # killprocess 79145 00:21:44.311 14:04:23 -- common/autotest_common.sh@936 -- # '[' -z 79145 ']' 00:21:44.311 14:04:23 -- common/autotest_common.sh@940 -- # kill -0 79145 00:21:44.311 14:04:23 -- common/autotest_common.sh@941 -- # uname 00:21:44.311 14:04:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:44.311 14:04:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79145 00:21:44.311 14:04:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:44.311 14:04:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:44.311 killing process with pid 79145 00:21:44.311 14:04:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79145' 00:21:44.311 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.311 00:21:44.311 Latency(us) 00:21:44.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.311 =================================================================================================================== 00:21:44.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.311 14:04:23 -- common/autotest_common.sh@955 -- # kill 79145 00:21:44.311 [2024-04-26 14:04:23.378579] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:44.311 
14:04:23 -- common/autotest_common.sh@960 -- # wait 79145 00:21:45.245 14:04:24 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMKpduGSAp 00:21:45.245 14:04:24 -- common/autotest_common.sh@638 -- # local es=0 00:21:45.245 14:04:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMKpduGSAp 00:21:45.245 14:04:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:45.245 14:04:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:45.245 14:04:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:45.245 14:04:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:45.245 14:04:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NMKpduGSAp 00:21:45.245 14:04:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:45.245 14:04:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:45.245 14:04:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:45.245 14:04:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NMKpduGSAp' 00:21:45.245 14:04:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.245 14:04:24 -- target/tls.sh@28 -- # bdevperf_pid=79309 00:21:45.245 14:04:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.245 14:04:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.245 14:04:24 -- target/tls.sh@31 -- # waitforlisten 79309 /var/tmp/bdevperf.sock 00:21:45.245 14:04:24 -- common/autotest_common.sh@817 -- # '[' -z 79309 ']' 00:21:45.245 14:04:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.245 14:04:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:45.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.245 14:04:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.245 14:04:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:45.245 14:04:24 -- common/autotest_common.sh@10 -- # set +x 00:21:45.245 [2024-04-26 14:04:24.784952] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:45.245 [2024-04-26 14:04:24.785076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79309 ] 00:21:45.504 [2024-04-26 14:04:24.955638] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.763 [2024-04-26 14:04:25.194420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.331 14:04:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:46.331 14:04:25 -- common/autotest_common.sh@850 -- # return 0 00:21:46.331 14:04:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NMKpduGSAp 00:21:46.331 [2024-04-26 14:04:25.937040] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.331 [2024-04-26 14:04:25.937232] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:46.331 [2024-04-26 14:04:25.946199] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:46.331 [2024-04-26 14:04:25.947068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:21:46.331 [2024-04-26 14:04:25.948046] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:21:46.331 [2024-04-26 14:04:25.949030] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.331 [2024-04-26 14:04:25.949068] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:46.331 [2024-04-26 14:04:25.949084] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
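The negative test above points bdev_nvme_attach_controller at cnode1 with the wrong key (/tmp/tmp.NMKpduGSAp instead of the key registered for host1), so the attach is expected to fail; the JSON-RPC request and the Invalid parameters error relayed by rpc.py are dumped just below. For context, a hypothetical minimal client issuing the same style of call directly against bdevperf's -r /var/tmp/bdevperf.sock Unix socket could look like the sketch below; the helper name and buffering strategy are illustrative and not how rpc.py is actually implemented:

import json
import socket

def spdk_rpc(sock_path: str, method: str, params: dict, call_id: int = 1) -> dict:
    # One JSON-RPC 2.0 request/response round trip over an SPDK app's Unix-domain RPC socket.
    request = {"jsonrpc": "2.0", "id": call_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed before a full response arrived")
            buf += chunk
            try:
                return json.loads(buf)   # a complete JSON document has arrived
            except json.JSONDecodeError:
                continue                 # partial response, keep reading

# e.g. the call this test drives through rpc.py:
# spdk_rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller",
#          {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4",
#           "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1",
#           "hostnqn": "nqn.2016-06.io.spdk:host1", "psk": "/tmp/tmp.NMKpduGSAp"})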
00:21:46.331 2024/04/26 14:04:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.NMKpduGSAp subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:46.331 request: 00:21:46.331 { 00:21:46.331 "method": "bdev_nvme_attach_controller", 00:21:46.331 "params": { 00:21:46.331 "name": "TLSTEST", 00:21:46.331 "trtype": "tcp", 00:21:46.331 "traddr": "10.0.0.2", 00:21:46.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.331 "adrfam": "ipv4", 00:21:46.331 "trsvcid": "4420", 00:21:46.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.331 "psk": "/tmp/tmp.NMKpduGSAp" 00:21:46.331 } 00:21:46.331 } 00:21:46.331 Got JSON-RPC error response 00:21:46.331 GoRPCClient: error on JSON-RPC call 00:21:46.331 14:04:25 -- target/tls.sh@36 -- # killprocess 79309 00:21:46.331 14:04:25 -- common/autotest_common.sh@936 -- # '[' -z 79309 ']' 00:21:46.331 14:04:25 -- common/autotest_common.sh@940 -- # kill -0 79309 00:21:46.331 14:04:25 -- common/autotest_common.sh@941 -- # uname 00:21:46.331 14:04:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:46.331 14:04:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79309 00:21:46.589 killing process with pid 79309 00:21:46.589 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.589 00:21:46.589 Latency(us) 00:21:46.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.589 =================================================================================================================== 00:21:46.589 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.589 14:04:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:46.590 14:04:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:46.590 14:04:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79309' 00:21:46.590 14:04:26 -- common/autotest_common.sh@955 -- # kill 79309 00:21:46.590 14:04:26 -- common/autotest_common.sh@960 -- # wait 79309 00:21:46.590 [2024-04-26 14:04:26.009938] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:47.964 14:04:27 -- target/tls.sh@37 -- # return 1 00:21:47.964 14:04:27 -- common/autotest_common.sh@641 -- # es=1 00:21:47.964 14:04:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:47.964 14:04:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:47.964 14:04:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:47.964 14:04:27 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cu7t8ZzYHU 00:21:47.964 14:04:27 -- common/autotest_common.sh@638 -- # local es=0 00:21:47.964 14:04:27 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cu7t8ZzYHU 00:21:47.964 14:04:27 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:47.964 14:04:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:47.964 14:04:27 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:47.964 14:04:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:47.964 14:04:27 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.cu7t8ZzYHU 00:21:47.964 14:04:27 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:47.964 14:04:27 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:47.964 14:04:27 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:47.964 14:04:27 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cu7t8ZzYHU' 00:21:47.964 14:04:27 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:47.964 14:04:27 -- target/tls.sh@28 -- # bdevperf_pid=79365 00:21:47.964 14:04:27 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:47.964 14:04:27 -- target/tls.sh@31 -- # waitforlisten 79365 /var/tmp/bdevperf.sock 00:21:47.964 14:04:27 -- common/autotest_common.sh@817 -- # '[' -z 79365 ']' 00:21:47.964 14:04:27 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:47.964 14:04:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.964 14:04:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:47.964 14:04:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:47.964 14:04:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:47.964 14:04:27 -- common/autotest_common.sh@10 -- # set +x 00:21:47.964 [2024-04-26 14:04:27.416548] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:47.964 [2024-04-26 14:04:27.416692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79365 ] 00:21:47.964 [2024-04-26 14:04:27.585035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.222 [2024-04-26 14:04:27.827562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.789 14:04:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:48.789 14:04:28 -- common/autotest_common.sh@850 -- # return 0 00:21:48.789 14:04:28 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.cu7t8ZzYHU 00:21:49.048 [2024-04-26 14:04:28.523424] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:49.048 [2024-04-26 14:04:28.523587] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:49.048 [2024-04-26 14:04:28.532552] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:49.048 [2024-04-26 14:04:28.532607] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:49.048 [2024-04-26 14:04:28.532694] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:49.048 [2024-04-26 14:04:28.533670] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:21:49.048 [2024-04-26 14:04:28.534650] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:21:49.048 [2024-04-26 14:04:28.535623] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.048 [2024-04-26 14:04:28.535664] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:49.048 [2024-04-26 14:04:28.535680] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.048 2024/04/26 14:04:28 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.cu7t8ZzYHU subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:49.048 request: 00:21:49.048 { 00:21:49.048 "method": "bdev_nvme_attach_controller", 00:21:49.048 "params": { 00:21:49.048 "name": "TLSTEST", 00:21:49.048 "trtype": "tcp", 00:21:49.048 "traddr": "10.0.0.2", 00:21:49.048 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.048 "adrfam": "ipv4", 00:21:49.048 "trsvcid": "4420", 00:21:49.048 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.048 "psk": "/tmp/tmp.cu7t8ZzYHU" 00:21:49.048 } 00:21:49.048 } 00:21:49.048 Got JSON-RPC error response 00:21:49.048 GoRPCClient: error on JSON-RPC call 00:21:49.048 14:04:28 -- target/tls.sh@36 -- # killprocess 79365 00:21:49.048 14:04:28 -- common/autotest_common.sh@936 -- # '[' -z 79365 ']' 00:21:49.048 14:04:28 -- common/autotest_common.sh@940 -- # kill -0 79365 00:21:49.048 14:04:28 -- common/autotest_common.sh@941 -- # uname 00:21:49.048 14:04:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:49.048 14:04:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79365 00:21:49.048 killing process with pid 79365 00:21:49.048 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.048 00:21:49.048 Latency(us) 00:21:49.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.048 =================================================================================================================== 00:21:49.048 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:49.048 14:04:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:49.048 14:04:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:49.048 14:04:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79365' 00:21:49.048 14:04:28 -- common/autotest_common.sh@955 -- # kill 79365 00:21:49.048 [2024-04-26 14:04:28.601687] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:49.048 14:04:28 -- common/autotest_common.sh@960 -- # wait 79365 00:21:50.426 14:04:29 -- target/tls.sh@37 -- # return 1 00:21:50.426 14:04:29 -- common/autotest_common.sh@641 -- # es=1 00:21:50.426 14:04:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:50.426 14:04:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:50.426 14:04:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:50.426 14:04:29 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.cu7t8ZzYHU 00:21:50.426 14:04:29 -- common/autotest_common.sh@638 -- # local es=0 00:21:50.426 14:04:29 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu7t8ZzYHU 00:21:50.426 14:04:29 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:50.426 14:04:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:50.426 14:04:29 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:50.426 14:04:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:50.426 14:04:29 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.cu7t8ZzYHU 00:21:50.426 14:04:29 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.426 14:04:29 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:50.426 14:04:29 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.426 14:04:29 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.cu7t8ZzYHU' 00:21:50.426 14:04:29 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.426 14:04:29 -- target/tls.sh@28 -- # bdevperf_pid=79424 00:21:50.426 14:04:29 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.426 14:04:29 -- target/tls.sh@31 -- # waitforlisten 79424 /var/tmp/bdevperf.sock 00:21:50.426 14:04:29 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.426 14:04:29 -- common/autotest_common.sh@817 -- # '[' -z 79424 ']' 00:21:50.426 14:04:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.426 14:04:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:50.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.426 14:04:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.426 14:04:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:50.426 14:04:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.426 [2024-04-26 14:04:30.025766] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:21:50.426 [2024-04-26 14:04:30.026488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79424 ] 00:21:50.684 [2024-04-26 14:04:30.199239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.943 [2024-04-26 14:04:30.446625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.512 14:04:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:51.512 14:04:30 -- common/autotest_common.sh@850 -- # return 0 00:21:51.512 14:04:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cu7t8ZzYHU 00:21:51.512 [2024-04-26 14:04:31.066275] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.512 [2024-04-26 14:04:31.066433] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.512 [2024-04-26 14:04:31.075097] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:51.512 [2024-04-26 14:04:31.075146] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:51.512 [2024-04-26 14:04:31.075219] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:51.512 [2024-04-26 14:04:31.075374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:21:51.512 [2024-04-26 14:04:31.076349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:21:51.512 [2024-04-26 14:04:31.077345] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:51.512 [2024-04-26 14:04:31.077377] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:51.512 [2024-04-26 14:04:31.077396] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:51.512 2024/04/26 14:04:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.cu7t8ZzYHU subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:51.512 request: 00:21:51.512 { 00:21:51.512 "method": "bdev_nvme_attach_controller", 00:21:51.512 "params": { 00:21:51.512 "name": "TLSTEST", 00:21:51.512 "trtype": "tcp", 00:21:51.512 "traddr": "10.0.0.2", 00:21:51.512 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.512 "adrfam": "ipv4", 00:21:51.512 "trsvcid": "4420", 00:21:51.512 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:51.512 "psk": "/tmp/tmp.cu7t8ZzYHU" 00:21:51.512 } 00:21:51.512 } 00:21:51.512 Got JSON-RPC error response 00:21:51.512 GoRPCClient: error on JSON-RPC call 00:21:51.512 14:04:31 -- target/tls.sh@36 -- # killprocess 79424 00:21:51.512 14:04:31 -- common/autotest_common.sh@936 -- # '[' -z 79424 ']' 00:21:51.512 14:04:31 -- common/autotest_common.sh@940 -- # kill -0 79424 00:21:51.512 14:04:31 -- common/autotest_common.sh@941 -- # uname 00:21:51.512 14:04:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:51.512 14:04:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79424 00:21:51.512 killing process with pid 79424 00:21:51.512 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.512 00:21:51.512 Latency(us) 00:21:51.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.512 =================================================================================================================== 00:21:51.512 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:51.512 14:04:31 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:51.512 14:04:31 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:51.512 14:04:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79424' 00:21:51.512 14:04:31 -- common/autotest_common.sh@955 -- # kill 79424 00:21:51.512 [2024-04-26 14:04:31.142822] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:51.512 14:04:31 -- common/autotest_common.sh@960 -- # wait 79424 00:21:52.885 14:04:32 -- target/tls.sh@37 -- # return 1 00:21:52.885 14:04:32 -- common/autotest_common.sh@641 -- # es=1 00:21:52.885 14:04:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:52.885 14:04:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:52.885 14:04:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:52.886 14:04:32 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:52.886 14:04:32 -- common/autotest_common.sh@638 -- # local es=0 00:21:52.886 14:04:32 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:52.886 14:04:32 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:21:52.886 14:04:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:52.886 14:04:32 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:21:52.886 14:04:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:52.886 14:04:32 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:21:52.886 14:04:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:52.886 14:04:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:52.886 14:04:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:52.886 14:04:32 -- target/tls.sh@23 -- # psk= 00:21:52.886 14:04:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:52.886 14:04:32 -- target/tls.sh@28 -- # bdevperf_pid=79476 00:21:52.886 14:04:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.886 14:04:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.886 14:04:32 -- target/tls.sh@31 -- # waitforlisten 79476 /var/tmp/bdevperf.sock 00:21:52.886 14:04:32 -- common/autotest_common.sh@817 -- # '[' -z 79476 ']' 00:21:52.886 14:04:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.886 14:04:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:52.886 14:04:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.886 14:04:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:52.886 14:04:32 -- common/autotest_common.sh@10 -- # set +x 00:21:53.144 [2024-04-26 14:04:32.601893] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:53.144 [2024-04-26 14:04:32.602457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79476 ] 00:21:53.144 [2024-04-26 14:04:32.773934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.403 [2024-04-26 14:04:33.015793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.974 14:04:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:53.974 14:04:33 -- common/autotest_common.sh@850 -- # return 0 00:21:53.974 14:04:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:53.974 [2024-04-26 14:04:33.631083] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:53.974 [2024-04-26 14:04:33.632637] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:21:53.974 [2024-04-26 14:04:33.633619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:53.974 [2024-04-26 14:04:33.633667] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:53.974 [2024-04-26 14:04:33.633686] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:53.974 2024/04/26 14:04:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:21:53.974 request: 00:21:53.974 { 00:21:53.974 "method": "bdev_nvme_attach_controller", 00:21:53.974 "params": { 00:21:53.974 "name": "TLSTEST", 00:21:53.974 "trtype": "tcp", 00:21:53.974 "traddr": "10.0.0.2", 00:21:53.974 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.974 "adrfam": "ipv4", 00:21:53.974 "trsvcid": "4420", 00:21:53.974 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:21:53.974 } 00:21:53.974 } 00:21:53.974 Got JSON-RPC error response 00:21:53.974 GoRPCClient: error on JSON-RPC call 00:21:54.233 14:04:33 -- target/tls.sh@36 -- # killprocess 79476 00:21:54.233 14:04:33 -- common/autotest_common.sh@936 -- # '[' -z 79476 ']' 00:21:54.233 14:04:33 -- common/autotest_common.sh@940 -- # kill -0 79476 00:21:54.233 14:04:33 -- common/autotest_common.sh@941 -- # uname 00:21:54.233 14:04:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:54.233 14:04:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79476 00:21:54.233 14:04:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:21:54.233 14:04:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:21:54.233 killing process with pid 79476 00:21:54.233 14:04:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79476' 00:21:54.233 14:04:33 -- common/autotest_common.sh@955 -- # kill 79476 00:21:54.233 Received shutdown signal, test time was about 10.000000 seconds 00:21:54.233 00:21:54.233 Latency(us) 00:21:54.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.233 =================================================================================================================== 00:21:54.233 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:54.233 14:04:33 -- common/autotest_common.sh@960 -- # wait 79476 00:21:55.608 14:04:35 -- target/tls.sh@37 -- # return 1 00:21:55.608 14:04:35 -- common/autotest_common.sh@641 -- # es=1 00:21:55.608 14:04:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:55.608 14:04:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:55.608 14:04:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:55.608 14:04:35 -- target/tls.sh@158 -- # killprocess 78792 00:21:55.608 14:04:35 -- common/autotest_common.sh@936 -- # '[' -z 78792 ']' 00:21:55.608 14:04:35 -- common/autotest_common.sh@940 -- # kill -0 78792 00:21:55.608 14:04:35 -- common/autotest_common.sh@941 -- # uname 00:21:55.608 14:04:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:55.608 14:04:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78792 00:21:55.608 14:04:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:55.608 14:04:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:55.608 14:04:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78792' 00:21:55.608 killing process with pid 78792 00:21:55.608 14:04:35 -- common/autotest_common.sh@955 -- # kill 78792 00:21:55.608 [2024-04-26 14:04:35.072151] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:55.608 14:04:35 -- 
common/autotest_common.sh@960 -- # wait 78792 00:21:57.045 14:04:36 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:57.045 14:04:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:57.045 14:04:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:57.045 14:04:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:57.045 14:04:36 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:57.045 14:04:36 -- nvmf/common.sh@693 -- # digest=2 00:21:57.045 14:04:36 -- nvmf/common.sh@694 -- # python - 00:21:57.045 14:04:36 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:57.045 14:04:36 -- target/tls.sh@160 -- # mktemp 00:21:57.045 14:04:36 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.ITmYqK4ymx 00:21:57.045 14:04:36 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:57.045 14:04:36 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.ITmYqK4ymx 00:21:57.045 14:04:36 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:57.045 14:04:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:57.045 14:04:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:57.045 14:04:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.045 14:04:36 -- nvmf/common.sh@470 -- # nvmfpid=79562 00:21:57.045 14:04:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:57.045 14:04:36 -- nvmf/common.sh@471 -- # waitforlisten 79562 00:21:57.045 14:04:36 -- common/autotest_common.sh@817 -- # '[' -z 79562 ']' 00:21:57.045 14:04:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.045 14:04:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:57.045 14:04:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.045 14:04:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:57.045 14:04:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.302 [2024-04-26 14:04:36.761250] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:57.302 [2024-04-26 14:04:36.761940] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.302 [2024-04-26 14:04:36.937003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.561 [2024-04-26 14:04:37.188584] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.561 [2024-04-26 14:04:37.188638] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.561 [2024-04-26 14:04:37.188654] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.561 [2024-04-26 14:04:37.188678] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.561 [2024-04-26 14:04:37.188692] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
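The interchange-format key that format_interchange_psk prints above can be reproduced with a few lines of Python. This is an illustrative sketch only: consistent with how the base64 string in the trace decodes, it assumes the key material is the literal ASCII string passed to the helper and that the trailing four bytes are its little-endian CRC32, with the "02" field selecting the SHA-384 hash; the function below is a stand-in for the test's shell helper, not SPDK API.

    # Sketch of the NVMe TLS PSK interchange format seen in the trace.
    # Assumption: key material = the literal ASCII string, suffix = CRC32 (LE).
    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        key_bytes = key.encode("ascii")
        crc = zlib.crc32(key_bytes).to_bytes(4, byteorder="little")
        b64 = base64.b64encode(key_bytes + crc).decode("ascii")
        return f"NVMeTLSkey-1:{digest:02d}:{b64}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # Expected to match the key_long value logged above.

The resulting string is what the test writes to /tmp/tmp.ITmYqK4ymx and protects with chmod 0600 before handing it to the target and to bdevperf.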
00:21:57.561 [2024-04-26 14:04:37.188735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.127 14:04:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:58.127 14:04:37 -- common/autotest_common.sh@850 -- # return 0 00:21:58.127 14:04:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:58.127 14:04:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:58.127 14:04:37 -- common/autotest_common.sh@10 -- # set +x 00:21:58.127 14:04:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.127 14:04:37 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.ITmYqK4ymx 00:21:58.127 14:04:37 -- target/tls.sh@49 -- # local key=/tmp/tmp.ITmYqK4ymx 00:21:58.127 14:04:37 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:58.385 [2024-04-26 14:04:37.876246] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.385 14:04:37 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:58.645 14:04:38 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:58.645 [2024-04-26 14:04:38.291708] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:58.645 [2024-04-26 14:04:38.291960] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.645 14:04:38 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:58.904 malloc0 00:21:58.904 14:04:38 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:59.163 14:04:38 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:21:59.422 [2024-04-26 14:04:38.980785] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:59.422 14:04:39 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ITmYqK4ymx 00:21:59.422 14:04:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:59.422 14:04:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:59.422 14:04:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:59.422 14:04:39 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ITmYqK4ymx' 00:21:59.422 14:04:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.422 14:04:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:59.422 14:04:39 -- target/tls.sh@28 -- # bdevperf_pid=79665 00:21:59.422 14:04:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.422 14:04:39 -- target/tls.sh@31 -- # waitforlisten 79665 /var/tmp/bdevperf.sock 00:21:59.422 14:04:39 -- common/autotest_common.sh@817 -- # '[' -z 79665 ']' 00:21:59.422 14:04:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.422 14:04:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:59.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:59.422 14:04:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.422 14:04:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:59.422 14:04:39 -- common/autotest_common.sh@10 -- # set +x 00:21:59.422 [2024-04-26 14:04:39.087489] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:21:59.422 [2024-04-26 14:04:39.087627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79665 ] 00:21:59.680 [2024-04-26 14:04:39.247371] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.938 [2024-04-26 14:04:39.498365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:00.506 14:04:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:00.506 14:04:39 -- common/autotest_common.sh@850 -- # return 0 00:22:00.506 14:04:39 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:22:00.506 [2024-04-26 14:04:40.136284] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:00.506 [2024-04-26 14:04:40.136433] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:00.765 TLSTESTn1 00:22:00.765 14:04:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:00.765 Running I/O for 10 seconds... 
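The bdev_nvme_attach_controller call issued through rpc.py above (and echoed back verbatim in the "request:" dumps whenever it fails) is a plain JSON-RPC 2.0 request sent over the bdevperf Unix-domain socket. A minimal client sketch follows, reusing the socket path, NQNs and PSK path from this run; it only illustrates the wire format and is not a substitute for rpc.py:

    # Minimal JSON-RPC sketch for the attach call shown in the trace.
    # Assumptions: bdevperf is listening on the socket below and a single
    # recv() is enough for the short reply (rpc.py parses incrementally).
    import json
    import socket

    SOCK_PATH = "/var/tmp/bdevperf.sock"
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.ITmYqK4ymx",
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(request).encode())
        print(sock.recv(65536).decode())

On success the reply names the created bdev (TLSTESTn1 here, which bdevperf.py then exercises); on failure it carries the Code/Msg pair that the Go RPC client reports in the error dumps earlier in the log.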
00:22:10.746 00:22:10.746 Latency(us) 00:22:10.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.746 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:10.746 Verification LBA range: start 0x0 length 0x2000 00:22:10.746 TLSTESTn1 : 10.03 4047.14 15.81 0.00 0.00 31569.69 9422.44 22003.25 00:22:10.746 =================================================================================================================== 00:22:10.746 Total : 4047.14 15.81 0.00 0.00 31569.69 9422.44 22003.25 00:22:10.746 0 00:22:10.746 14:04:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:10.746 14:04:50 -- target/tls.sh@45 -- # killprocess 79665 00:22:10.746 14:04:50 -- common/autotest_common.sh@936 -- # '[' -z 79665 ']' 00:22:10.746 14:04:50 -- common/autotest_common.sh@940 -- # kill -0 79665 00:22:10.746 14:04:50 -- common/autotest_common.sh@941 -- # uname 00:22:10.746 14:04:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:10.746 14:04:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79665 00:22:11.005 killing process with pid 79665 00:22:11.005 Received shutdown signal, test time was about 10.000000 seconds 00:22:11.005 00:22:11.005 Latency(us) 00:22:11.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.005 =================================================================================================================== 00:22:11.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.005 14:04:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:11.005 14:04:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:11.005 14:04:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79665' 00:22:11.005 14:04:50 -- common/autotest_common.sh@955 -- # kill 79665 00:22:11.005 [2024-04-26 14:04:50.422518] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:11.005 14:04:50 -- common/autotest_common.sh@960 -- # wait 79665 00:22:12.380 14:04:51 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.ITmYqK4ymx 00:22:12.380 14:04:51 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ITmYqK4ymx 00:22:12.380 14:04:51 -- common/autotest_common.sh@638 -- # local es=0 00:22:12.380 14:04:51 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ITmYqK4ymx 00:22:12.380 14:04:51 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:22:12.380 14:04:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:12.380 14:04:51 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:22:12.380 14:04:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:12.380 14:04:51 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ITmYqK4ymx 00:22:12.380 14:04:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:12.380 14:04:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:12.380 14:04:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:12.380 14:04:51 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ITmYqK4ymx' 00:22:12.380 14:04:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.380 14:04:51 -- target/tls.sh@28 -- # bdevperf_pid=79824 00:22:12.380 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.380 14:04:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:12.380 14:04:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:12.380 14:04:51 -- target/tls.sh@31 -- # waitforlisten 79824 /var/tmp/bdevperf.sock 00:22:12.380 14:04:51 -- common/autotest_common.sh@817 -- # '[' -z 79824 ']' 00:22:12.380 14:04:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.380 14:04:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:12.380 14:04:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.380 14:04:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:12.380 14:04:51 -- common/autotest_common.sh@10 -- # set +x 00:22:12.380 [2024-04-26 14:04:51.915172] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:12.380 [2024-04-26 14:04:51.915335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79824 ] 00:22:12.639 [2024-04-26 14:04:52.089922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.898 [2024-04-26 14:04:52.350703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:13.157 14:04:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:13.157 14:04:52 -- common/autotest_common.sh@850 -- # return 0 00:22:13.157 14:04:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:22:13.416 [2024-04-26 14:04:52.984712] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.416 [2024-04-26 14:04:52.984802] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:13.416 [2024-04-26 14:04:52.984816] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.ITmYqK4ymx 00:22:13.416 2024/04/26 14:04:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.ITmYqK4ymx subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:22:13.416 request: 00:22:13.416 { 00:22:13.416 "method": "bdev_nvme_attach_controller", 00:22:13.416 "params": { 00:22:13.416 "name": "TLSTEST", 00:22:13.416 "trtype": "tcp", 00:22:13.416 "traddr": "10.0.0.2", 00:22:13.416 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:13.416 "adrfam": "ipv4", 00:22:13.416 "trsvcid": "4420", 00:22:13.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:13.416 "psk": "/tmp/tmp.ITmYqK4ymx" 00:22:13.416 } 00:22:13.416 } 00:22:13.416 Got JSON-RPC error response 00:22:13.416 GoRPCClient: error on JSON-RPC call 00:22:13.416 14:04:53 -- target/tls.sh@36 -- # killprocess 79824 00:22:13.416 14:04:53 -- common/autotest_common.sh@936 -- # '[' -z 79824 ']' 00:22:13.416 14:04:53 -- common/autotest_common.sh@940 -- # kill -0 79824 
00:22:13.416 14:04:53 -- common/autotest_common.sh@941 -- # uname 00:22:13.416 14:04:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:13.416 14:04:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79824 00:22:13.416 killing process with pid 79824 00:22:13.416 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.416 00:22:13.416 Latency(us) 00:22:13.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.416 =================================================================================================================== 00:22:13.416 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:13.416 14:04:53 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:13.416 14:04:53 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:13.416 14:04:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79824' 00:22:13.416 14:04:53 -- common/autotest_common.sh@955 -- # kill 79824 00:22:13.416 14:04:53 -- common/autotest_common.sh@960 -- # wait 79824 00:22:14.820 14:04:54 -- target/tls.sh@37 -- # return 1 00:22:14.820 14:04:54 -- common/autotest_common.sh@641 -- # es=1 00:22:14.820 14:04:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:14.820 14:04:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:14.820 14:04:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:14.820 14:04:54 -- target/tls.sh@174 -- # killprocess 79562 00:22:14.820 14:04:54 -- common/autotest_common.sh@936 -- # '[' -z 79562 ']' 00:22:14.820 14:04:54 -- common/autotest_common.sh@940 -- # kill -0 79562 00:22:14.820 14:04:54 -- common/autotest_common.sh@941 -- # uname 00:22:14.820 14:04:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:14.820 14:04:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79562 00:22:14.820 killing process with pid 79562 00:22:14.820 14:04:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:14.820 14:04:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:14.820 14:04:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79562' 00:22:14.820 14:04:54 -- common/autotest_common.sh@955 -- # kill 79562 00:22:14.820 [2024-04-26 14:04:54.356993] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:14.820 14:04:54 -- common/autotest_common.sh@960 -- # wait 79562 00:22:16.197 14:04:55 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:16.197 14:04:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:16.197 14:04:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:16.197 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:22:16.197 14:04:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:16.197 14:04:55 -- nvmf/common.sh@470 -- # nvmfpid=79904 00:22:16.197 14:04:55 -- nvmf/common.sh@471 -- # waitforlisten 79904 00:22:16.197 14:04:55 -- common/autotest_common.sh@817 -- # '[' -z 79904 ']' 00:22:16.198 14:04:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.198 14:04:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:16.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:16.198 14:04:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.198 14:04:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:16.198 14:04:55 -- common/autotest_common.sh@10 -- # set +x 00:22:16.456 [2024-04-26 14:04:55.878615] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:16.456 [2024-04-26 14:04:55.878748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.456 [2024-04-26 14:04:56.053107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.715 [2024-04-26 14:04:56.289912] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.715 [2024-04-26 14:04:56.289974] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.715 [2024-04-26 14:04:56.289990] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.715 [2024-04-26 14:04:56.290013] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.715 [2024-04-26 14:04:56.290027] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.715 [2024-04-26 14:04:56.290066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.282 14:04:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:17.282 14:04:56 -- common/autotest_common.sh@850 -- # return 0 00:22:17.282 14:04:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:17.282 14:04:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:17.282 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.282 14:04:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.282 14:04:56 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.ITmYqK4ymx 00:22:17.282 14:04:56 -- common/autotest_common.sh@638 -- # local es=0 00:22:17.282 14:04:56 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.ITmYqK4ymx 00:22:17.282 14:04:56 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:22:17.282 14:04:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:17.282 14:04:56 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:22:17.282 14:04:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:17.282 14:04:56 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.ITmYqK4ymx 00:22:17.282 14:04:56 -- target/tls.sh@49 -- # local key=/tmp/tmp.ITmYqK4ymx 00:22:17.282 14:04:56 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:17.584 [2024-04-26 14:04:56.979323] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.584 14:04:57 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:17.584 14:04:57 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:17.843 [2024-04-26 14:04:57.402779] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.843 
[2024-04-26 14:04:57.403058] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.843 14:04:57 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:18.101 malloc0 00:22:18.101 14:04:57 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:18.360 14:04:57 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:22:18.619 [2024-04-26 14:04:58.064427] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:18.619 [2024-04-26 14:04:58.064486] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:18.619 [2024-04-26 14:04:58.064516] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:22:18.619 2024/04/26 14:04:58 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.ITmYqK4ymx], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:22:18.619 request: 00:22:18.619 { 00:22:18.619 "method": "nvmf_subsystem_add_host", 00:22:18.619 "params": { 00:22:18.619 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:18.619 "host": "nqn.2016-06.io.spdk:host1", 00:22:18.619 "psk": "/tmp/tmp.ITmYqK4ymx" 00:22:18.619 } 00:22:18.619 } 00:22:18.619 Got JSON-RPC error response 00:22:18.619 GoRPCClient: error on JSON-RPC call 00:22:18.619 14:04:58 -- common/autotest_common.sh@641 -- # es=1 00:22:18.619 14:04:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:18.619 14:04:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:18.619 14:04:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:18.619 14:04:58 -- target/tls.sh@180 -- # killprocess 79904 00:22:18.619 14:04:58 -- common/autotest_common.sh@936 -- # '[' -z 79904 ']' 00:22:18.619 14:04:58 -- common/autotest_common.sh@940 -- # kill -0 79904 00:22:18.619 14:04:58 -- common/autotest_common.sh@941 -- # uname 00:22:18.619 14:04:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:18.619 14:04:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79904 00:22:18.619 14:04:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:18.619 14:04:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:18.619 killing process with pid 79904 00:22:18.619 14:04:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79904' 00:22:18.619 14:04:58 -- common/autotest_common.sh@955 -- # kill 79904 00:22:18.619 14:04:58 -- common/autotest_common.sh@960 -- # wait 79904 00:22:19.999 14:04:59 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.ITmYqK4ymx 00:22:19.999 14:04:59 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:19.999 14:04:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:19.999 14:04:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:19.999 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:22:19.999 14:04:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:19.999 14:04:59 -- nvmf/common.sh@470 -- # nvmfpid=80028 00:22:19.999 14:04:59 -- nvmf/common.sh@471 -- # waitforlisten 80028 00:22:19.999 14:04:59 -- common/autotest_common.sh@817 -- # '[' -z 
80028 ']' 00:22:19.999 14:04:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.999 14:04:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.999 14:04:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.999 14:04:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.999 14:04:59 -- common/autotest_common.sh@10 -- # set +x 00:22:20.258 [2024-04-26 14:04:59.696541] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:20.258 [2024-04-26 14:04:59.696665] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.258 [2024-04-26 14:04:59.872013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.516 [2024-04-26 14:05:00.119966] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.516 [2024-04-26 14:05:00.120027] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.516 [2024-04-26 14:05:00.120043] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.516 [2024-04-26 14:05:00.120066] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.516 [2024-04-26 14:05:00.120079] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.516 [2024-04-26 14:05:00.120115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.082 14:05:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:21.082 14:05:00 -- common/autotest_common.sh@850 -- # return 0 00:22:21.082 14:05:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:21.082 14:05:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:21.082 14:05:00 -- common/autotest_common.sh@10 -- # set +x 00:22:21.082 14:05:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:21.082 14:05:00 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.ITmYqK4ymx 00:22:21.082 14:05:00 -- target/tls.sh@49 -- # local key=/tmp/tmp.ITmYqK4ymx 00:22:21.082 14:05:00 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:21.340 [2024-04-26 14:05:00.825807] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:21.340 14:05:00 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:21.598 14:05:01 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:21.598 [2024-04-26 14:05:01.233638] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:21.598 [2024-04-26 14:05:01.233885] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:21.598 14:05:01 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:21.867 malloc0 00:22:22.125 14:05:01 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:22.383 14:05:01 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:22:22.383 [2024-04-26 14:05:02.026700] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:22.383 14:05:02 -- target/tls.sh@188 -- # bdevperf_pid=80125 00:22:22.383 14:05:02 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:22.383 14:05:02 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:22.383 14:05:02 -- target/tls.sh@191 -- # waitforlisten 80125 /var/tmp/bdevperf.sock 00:22:22.383 14:05:02 -- common/autotest_common.sh@817 -- # '[' -z 80125 ']' 00:22:22.383 14:05:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:22.383 14:05:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:22.383 14:05:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:22.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:22.383 14:05:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:22.383 14:05:02 -- common/autotest_common.sh@10 -- # set +x 00:22:22.641 [2024-04-26 14:05:02.182300] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:22.641 [2024-04-26 14:05:02.182531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80125 ] 00:22:22.899 [2024-04-26 14:05:02.360788] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.206 [2024-04-26 14:05:02.609938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.472 14:05:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:23.472 14:05:03 -- common/autotest_common.sh@850 -- # return 0 00:22:23.472 14:05:03 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:22:23.730 [2024-04-26 14:05:03.242509] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.730 [2024-04-26 14:05:03.242659] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:23.730 TLSTESTn1 00:22:23.730 14:05:03 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:23.988 14:05:03 -- target/tls.sh@196 -- # tgtconf='{ 00:22:23.988 "subsystems": [ 00:22:23.988 { 00:22:23.988 "subsystem": "keyring", 00:22:23.988 "config": [] 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "subsystem": "iobuf", 00:22:23.988 "config": [ 00:22:23.988 { 00:22:23.988 "method": "iobuf_set_options", 00:22:23.988 "params": { 00:22:23.988 "large_bufsize": 135168, 00:22:23.988 "large_pool_count": 1024, 00:22:23.988 "small_bufsize": 8192, 00:22:23.988 "small_pool_count": 8192 00:22:23.988 } 
00:22:23.988 } 00:22:23.988 ] 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "subsystem": "sock", 00:22:23.988 "config": [ 00:22:23.988 { 00:22:23.988 "method": "sock_impl_set_options", 00:22:23.988 "params": { 00:22:23.988 "enable_ktls": false, 00:22:23.988 "enable_placement_id": 0, 00:22:23.988 "enable_quickack": false, 00:22:23.988 "enable_recv_pipe": true, 00:22:23.988 "enable_zerocopy_send_client": false, 00:22:23.988 "enable_zerocopy_send_server": true, 00:22:23.988 "impl_name": "posix", 00:22:23.988 "recv_buf_size": 2097152, 00:22:23.988 "send_buf_size": 2097152, 00:22:23.988 "tls_version": 0, 00:22:23.988 "zerocopy_threshold": 0 00:22:23.988 } 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "method": "sock_impl_set_options", 00:22:23.988 "params": { 00:22:23.988 "enable_ktls": false, 00:22:23.988 "enable_placement_id": 0, 00:22:23.988 "enable_quickack": false, 00:22:23.988 "enable_recv_pipe": true, 00:22:23.988 "enable_zerocopy_send_client": false, 00:22:23.988 "enable_zerocopy_send_server": true, 00:22:23.988 "impl_name": "ssl", 00:22:23.988 "recv_buf_size": 4096, 00:22:23.988 "send_buf_size": 4096, 00:22:23.988 "tls_version": 0, 00:22:23.988 "zerocopy_threshold": 0 00:22:23.988 } 00:22:23.988 } 00:22:23.988 ] 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "subsystem": "vmd", 00:22:23.988 "config": [] 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "subsystem": "accel", 00:22:23.988 "config": [ 00:22:23.988 { 00:22:23.988 "method": "accel_set_options", 00:22:23.988 "params": { 00:22:23.988 "buf_count": 2048, 00:22:23.988 "large_cache_size": 16, 00:22:23.988 "sequence_count": 2048, 00:22:23.988 "small_cache_size": 128, 00:22:23.988 "task_count": 2048 00:22:23.988 } 00:22:23.988 } 00:22:23.988 ] 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "subsystem": "bdev", 00:22:23.988 "config": [ 00:22:23.988 { 00:22:23.988 "method": "bdev_set_options", 00:22:23.988 "params": { 00:22:23.988 "bdev_auto_examine": true, 00:22:23.988 "bdev_io_cache_size": 256, 00:22:23.988 "bdev_io_pool_size": 65535, 00:22:23.988 "iobuf_large_cache_size": 16, 00:22:23.988 "iobuf_small_cache_size": 128 00:22:23.988 } 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "method": "bdev_raid_set_options", 00:22:23.988 "params": { 00:22:23.988 "process_window_size_kb": 1024 00:22:23.988 } 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "method": "bdev_iscsi_set_options", 00:22:23.988 "params": { 00:22:23.988 "timeout_sec": 30 00:22:23.988 } 00:22:23.988 }, 00:22:23.988 { 00:22:23.988 "method": "bdev_nvme_set_options", 00:22:23.988 "params": { 00:22:23.988 "action_on_timeout": "none", 00:22:23.988 "allow_accel_sequence": false, 00:22:23.988 "arbitration_burst": 0, 00:22:23.988 "bdev_retry_count": 3, 00:22:23.988 "ctrlr_loss_timeout_sec": 0, 00:22:23.988 "delay_cmd_submit": true, 00:22:23.988 "dhchap_dhgroups": [ 00:22:23.988 "null", 00:22:23.988 "ffdhe2048", 00:22:23.988 "ffdhe3072", 00:22:23.988 "ffdhe4096", 00:22:23.988 "ffdhe6144", 00:22:23.988 "ffdhe8192" 00:22:23.988 ], 00:22:23.988 "dhchap_digests": [ 00:22:23.988 "sha256", 00:22:23.988 "sha384", 00:22:23.988 "sha512" 00:22:23.988 ], 00:22:23.988 "disable_auto_failback": false, 00:22:23.988 "fast_io_fail_timeout_sec": 0, 00:22:23.988 "generate_uuids": false, 00:22:23.988 "high_priority_weight": 0, 00:22:23.988 "io_path_stat": false, 00:22:23.988 "io_queue_requests": 0, 00:22:23.988 "keep_alive_timeout_ms": 10000, 00:22:23.988 "low_priority_weight": 0, 00:22:23.988 "medium_priority_weight": 0, 00:22:23.988 "nvme_adminq_poll_period_us": 10000, 00:22:23.988 "nvme_error_stat": false, 
00:22:23.988 "nvme_ioq_poll_period_us": 0, 00:22:23.988 "rdma_cm_event_timeout_ms": 0, 00:22:23.988 "rdma_max_cq_size": 0, 00:22:23.988 "rdma_srq_size": 0, 00:22:23.988 "reconnect_delay_sec": 0, 00:22:23.988 "timeout_admin_us": 0, 00:22:23.989 "timeout_us": 0, 00:22:23.989 "transport_ack_timeout": 0, 00:22:23.989 "transport_retry_count": 4, 00:22:23.989 "transport_tos": 0 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "bdev_nvme_set_hotplug", 00:22:23.989 "params": { 00:22:23.989 "enable": false, 00:22:23.989 "period_us": 100000 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "bdev_malloc_create", 00:22:23.989 "params": { 00:22:23.989 "block_size": 4096, 00:22:23.989 "name": "malloc0", 00:22:23.989 "num_blocks": 8192, 00:22:23.989 "optimal_io_boundary": 0, 00:22:23.989 "physical_block_size": 4096, 00:22:23.989 "uuid": "afe62213-fb4b-419f-967d-086554e29742" 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "bdev_wait_for_examine" 00:22:23.989 } 00:22:23.989 ] 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "subsystem": "nbd", 00:22:23.989 "config": [] 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "subsystem": "scheduler", 00:22:23.989 "config": [ 00:22:23.989 { 00:22:23.989 "method": "framework_set_scheduler", 00:22:23.989 "params": { 00:22:23.989 "name": "static" 00:22:23.989 } 00:22:23.989 } 00:22:23.989 ] 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "subsystem": "nvmf", 00:22:23.989 "config": [ 00:22:23.989 { 00:22:23.989 "method": "nvmf_set_config", 00:22:23.989 "params": { 00:22:23.989 "admin_cmd_passthru": { 00:22:23.989 "identify_ctrlr": false 00:22:23.989 }, 00:22:23.989 "discovery_filter": "match_any" 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_set_max_subsystems", 00:22:23.989 "params": { 00:22:23.989 "max_subsystems": 1024 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_set_crdt", 00:22:23.989 "params": { 00:22:23.989 "crdt1": 0, 00:22:23.989 "crdt2": 0, 00:22:23.989 "crdt3": 0 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_create_transport", 00:22:23.989 "params": { 00:22:23.989 "abort_timeout_sec": 1, 00:22:23.989 "ack_timeout": 0, 00:22:23.989 "buf_cache_size": 4294967295, 00:22:23.989 "c2h_success": false, 00:22:23.989 "data_wr_pool_size": 0, 00:22:23.989 "dif_insert_or_strip": false, 00:22:23.989 "in_capsule_data_size": 4096, 00:22:23.989 "io_unit_size": 131072, 00:22:23.989 "max_aq_depth": 128, 00:22:23.989 "max_io_qpairs_per_ctrlr": 127, 00:22:23.989 "max_io_size": 131072, 00:22:23.989 "max_queue_depth": 128, 00:22:23.989 "num_shared_buffers": 511, 00:22:23.989 "sock_priority": 0, 00:22:23.989 "trtype": "TCP", 00:22:23.989 "zcopy": false 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_create_subsystem", 00:22:23.989 "params": { 00:22:23.989 "allow_any_host": false, 00:22:23.989 "ana_reporting": false, 00:22:23.989 "max_cntlid": 65519, 00:22:23.989 "max_namespaces": 10, 00:22:23.989 "min_cntlid": 1, 00:22:23.989 "model_number": "SPDK bdev Controller", 00:22:23.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.989 "serial_number": "SPDK00000000000001" 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_subsystem_add_host", 00:22:23.989 "params": { 00:22:23.989 "host": "nqn.2016-06.io.spdk:host1", 00:22:23.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.989 "psk": "/tmp/tmp.ITmYqK4ymx" 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_subsystem_add_ns", 
00:22:23.989 "params": { 00:22:23.989 "namespace": { 00:22:23.989 "bdev_name": "malloc0", 00:22:23.989 "nguid": "AFE62213FB4B419F967D086554E29742", 00:22:23.989 "no_auto_visible": false, 00:22:23.989 "nsid": 1, 00:22:23.989 "uuid": "afe62213-fb4b-419f-967d-086554e29742" 00:22:23.989 }, 00:22:23.989 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:23.989 } 00:22:23.989 }, 00:22:23.989 { 00:22:23.989 "method": "nvmf_subsystem_add_listener", 00:22:23.989 "params": { 00:22:23.989 "listen_address": { 00:22:23.989 "adrfam": "IPv4", 00:22:23.989 "traddr": "10.0.0.2", 00:22:23.989 "trsvcid": "4420", 00:22:23.989 "trtype": "TCP" 00:22:23.989 }, 00:22:23.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.989 "secure_channel": true 00:22:23.989 } 00:22:23.989 } 00:22:23.989 ] 00:22:23.989 } 00:22:23.989 ] 00:22:23.989 }' 00:22:23.989 14:05:03 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:24.248 14:05:03 -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:24.248 "subsystems": [ 00:22:24.248 { 00:22:24.248 "subsystem": "keyring", 00:22:24.248 "config": [] 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "subsystem": "iobuf", 00:22:24.248 "config": [ 00:22:24.248 { 00:22:24.248 "method": "iobuf_set_options", 00:22:24.248 "params": { 00:22:24.248 "large_bufsize": 135168, 00:22:24.248 "large_pool_count": 1024, 00:22:24.248 "small_bufsize": 8192, 00:22:24.248 "small_pool_count": 8192 00:22:24.248 } 00:22:24.248 } 00:22:24.248 ] 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "subsystem": "sock", 00:22:24.248 "config": [ 00:22:24.248 { 00:22:24.248 "method": "sock_impl_set_options", 00:22:24.248 "params": { 00:22:24.248 "enable_ktls": false, 00:22:24.248 "enable_placement_id": 0, 00:22:24.248 "enable_quickack": false, 00:22:24.248 "enable_recv_pipe": true, 00:22:24.248 "enable_zerocopy_send_client": false, 00:22:24.248 "enable_zerocopy_send_server": true, 00:22:24.248 "impl_name": "posix", 00:22:24.248 "recv_buf_size": 2097152, 00:22:24.248 "send_buf_size": 2097152, 00:22:24.248 "tls_version": 0, 00:22:24.248 "zerocopy_threshold": 0 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "sock_impl_set_options", 00:22:24.248 "params": { 00:22:24.248 "enable_ktls": false, 00:22:24.248 "enable_placement_id": 0, 00:22:24.248 "enable_quickack": false, 00:22:24.248 "enable_recv_pipe": true, 00:22:24.248 "enable_zerocopy_send_client": false, 00:22:24.248 "enable_zerocopy_send_server": true, 00:22:24.248 "impl_name": "ssl", 00:22:24.248 "recv_buf_size": 4096, 00:22:24.248 "send_buf_size": 4096, 00:22:24.248 "tls_version": 0, 00:22:24.248 "zerocopy_threshold": 0 00:22:24.248 } 00:22:24.248 } 00:22:24.248 ] 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "subsystem": "vmd", 00:22:24.248 "config": [] 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "subsystem": "accel", 00:22:24.248 "config": [ 00:22:24.248 { 00:22:24.248 "method": "accel_set_options", 00:22:24.248 "params": { 00:22:24.248 "buf_count": 2048, 00:22:24.248 "large_cache_size": 16, 00:22:24.248 "sequence_count": 2048, 00:22:24.248 "small_cache_size": 128, 00:22:24.248 "task_count": 2048 00:22:24.248 } 00:22:24.248 } 00:22:24.248 ] 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "subsystem": "bdev", 00:22:24.248 "config": [ 00:22:24.248 { 00:22:24.248 "method": "bdev_set_options", 00:22:24.248 "params": { 00:22:24.248 "bdev_auto_examine": true, 00:22:24.248 "bdev_io_cache_size": 256, 00:22:24.248 "bdev_io_pool_size": 65535, 00:22:24.248 "iobuf_large_cache_size": 16, 00:22:24.248 "iobuf_small_cache_size": 128 
00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "bdev_raid_set_options", 00:22:24.248 "params": { 00:22:24.248 "process_window_size_kb": 1024 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "bdev_iscsi_set_options", 00:22:24.248 "params": { 00:22:24.248 "timeout_sec": 30 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "bdev_nvme_set_options", 00:22:24.248 "params": { 00:22:24.248 "action_on_timeout": "none", 00:22:24.248 "allow_accel_sequence": false, 00:22:24.248 "arbitration_burst": 0, 00:22:24.248 "bdev_retry_count": 3, 00:22:24.248 "ctrlr_loss_timeout_sec": 0, 00:22:24.248 "delay_cmd_submit": true, 00:22:24.248 "dhchap_dhgroups": [ 00:22:24.248 "null", 00:22:24.248 "ffdhe2048", 00:22:24.248 "ffdhe3072", 00:22:24.248 "ffdhe4096", 00:22:24.248 "ffdhe6144", 00:22:24.248 "ffdhe8192" 00:22:24.248 ], 00:22:24.248 "dhchap_digests": [ 00:22:24.248 "sha256", 00:22:24.248 "sha384", 00:22:24.248 "sha512" 00:22:24.248 ], 00:22:24.248 "disable_auto_failback": false, 00:22:24.248 "fast_io_fail_timeout_sec": 0, 00:22:24.248 "generate_uuids": false, 00:22:24.248 "high_priority_weight": 0, 00:22:24.248 "io_path_stat": false, 00:22:24.248 "io_queue_requests": 512, 00:22:24.248 "keep_alive_timeout_ms": 10000, 00:22:24.248 "low_priority_weight": 0, 00:22:24.248 "medium_priority_weight": 0, 00:22:24.248 "nvme_adminq_poll_period_us": 10000, 00:22:24.248 "nvme_error_stat": false, 00:22:24.248 "nvme_ioq_poll_period_us": 0, 00:22:24.248 "rdma_cm_event_timeout_ms": 0, 00:22:24.248 "rdma_max_cq_size": 0, 00:22:24.248 "rdma_srq_size": 0, 00:22:24.248 "reconnect_delay_sec": 0, 00:22:24.248 "timeout_admin_us": 0, 00:22:24.248 "timeout_us": 0, 00:22:24.248 "transport_ack_timeout": 0, 00:22:24.248 "transport_retry_count": 4, 00:22:24.248 "transport_tos": 0 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "bdev_nvme_attach_controller", 00:22:24.248 "params": { 00:22:24.248 "adrfam": "IPv4", 00:22:24.248 "ctrlr_loss_timeout_sec": 0, 00:22:24.248 "ddgst": false, 00:22:24.248 "fast_io_fail_timeout_sec": 0, 00:22:24.248 "hdgst": false, 00:22:24.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.248 "name": "TLSTEST", 00:22:24.248 "prchk_guard": false, 00:22:24.248 "prchk_reftag": false, 00:22:24.248 "psk": "/tmp/tmp.ITmYqK4ymx", 00:22:24.248 "reconnect_delay_sec": 0, 00:22:24.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.248 "traddr": "10.0.0.2", 00:22:24.248 "trsvcid": "4420", 00:22:24.248 "trtype": "TCP" 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "bdev_nvme_set_hotplug", 00:22:24.248 "params": { 00:22:24.248 "enable": false, 00:22:24.248 "period_us": 100000 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "method": "bdev_wait_for_examine" 00:22:24.248 } 00:22:24.248 ] 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "subsystem": "nbd", 00:22:24.248 "config": [] 00:22:24.248 } 00:22:24.248 ] 00:22:24.248 }' 00:22:24.248 14:05:03 -- target/tls.sh@199 -- # killprocess 80125 00:22:24.248 14:05:03 -- common/autotest_common.sh@936 -- # '[' -z 80125 ']' 00:22:24.248 14:05:03 -- common/autotest_common.sh@940 -- # kill -0 80125 00:22:24.248 14:05:03 -- common/autotest_common.sh@941 -- # uname 00:22:24.248 14:05:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:24.248 14:05:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80125 00:22:24.583 killing process with pid 80125 00:22:24.583 Received shutdown signal, test time was about 10.000000 seconds 00:22:24.583 00:22:24.583 
Latency(us) 00:22:24.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:24.583 =================================================================================================================== 00:22:24.583 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:24.583 14:05:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:24.583 14:05:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:24.583 14:05:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80125' 00:22:24.583 14:05:03 -- common/autotest_common.sh@955 -- # kill 80125 00:22:24.583 [2024-04-26 14:05:03.943144] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:24.583 14:05:03 -- common/autotest_common.sh@960 -- # wait 80125 00:22:25.958 14:05:05 -- target/tls.sh@200 -- # killprocess 80028 00:22:25.958 14:05:05 -- common/autotest_common.sh@936 -- # '[' -z 80028 ']' 00:22:25.958 14:05:05 -- common/autotest_common.sh@940 -- # kill -0 80028 00:22:25.958 14:05:05 -- common/autotest_common.sh@941 -- # uname 00:22:25.958 14:05:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.958 14:05:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80028 00:22:25.958 killing process with pid 80028 00:22:25.958 14:05:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:25.958 14:05:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:25.958 14:05:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80028' 00:22:25.958 14:05:05 -- common/autotest_common.sh@955 -- # kill 80028 00:22:25.958 [2024-04-26 14:05:05.307373] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:25.958 14:05:05 -- common/autotest_common.sh@960 -- # wait 80028 00:22:27.335 14:05:06 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:27.335 14:05:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:27.335 14:05:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:27.335 14:05:06 -- common/autotest_common.sh@10 -- # set +x 00:22:27.335 14:05:06 -- target/tls.sh@203 -- # echo '{ 00:22:27.335 "subsystems": [ 00:22:27.335 { 00:22:27.335 "subsystem": "keyring", 00:22:27.335 "config": [] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "iobuf", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "iobuf_set_options", 00:22:27.335 "params": { 00:22:27.335 "large_bufsize": 135168, 00:22:27.335 "large_pool_count": 1024, 00:22:27.335 "small_bufsize": 8192, 00:22:27.335 "small_pool_count": 8192 00:22:27.335 } 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "sock", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "sock_impl_set_options", 00:22:27.335 "params": { 00:22:27.335 "enable_ktls": false, 00:22:27.335 "enable_placement_id": 0, 00:22:27.335 "enable_quickack": false, 00:22:27.335 "enable_recv_pipe": true, 00:22:27.335 "enable_zerocopy_send_client": false, 00:22:27.335 "enable_zerocopy_send_server": true, 00:22:27.335 "impl_name": "posix", 00:22:27.335 "recv_buf_size": 2097152, 00:22:27.335 "send_buf_size": 2097152, 00:22:27.335 "tls_version": 0, 00:22:27.335 "zerocopy_threshold": 0 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "sock_impl_set_options", 00:22:27.335 "params": { 00:22:27.335 
"enable_ktls": false, 00:22:27.335 "enable_placement_id": 0, 00:22:27.335 "enable_quickack": false, 00:22:27.335 "enable_recv_pipe": true, 00:22:27.335 "enable_zerocopy_send_client": false, 00:22:27.335 "enable_zerocopy_send_server": true, 00:22:27.335 "impl_name": "ssl", 00:22:27.335 "recv_buf_size": 4096, 00:22:27.335 "send_buf_size": 4096, 00:22:27.335 "tls_version": 0, 00:22:27.335 "zerocopy_threshold": 0 00:22:27.335 } 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "vmd", 00:22:27.335 "config": [] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "accel", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "accel_set_options", 00:22:27.335 "params": { 00:22:27.335 "buf_count": 2048, 00:22:27.335 "large_cache_size": 16, 00:22:27.335 "sequence_count": 2048, 00:22:27.335 "small_cache_size": 128, 00:22:27.335 "task_count": 2048 00:22:27.335 } 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "bdev", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "bdev_set_options", 00:22:27.335 "params": { 00:22:27.335 "bdev_auto_examine": true, 00:22:27.335 "bdev_io_cache_size": 256, 00:22:27.335 "bdev_io_pool_size": 65535, 00:22:27.335 "iobuf_large_cache_size": 16, 00:22:27.335 "iobuf_small_cache_size": 128 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_raid_set_options", 00:22:27.335 "params": { 00:22:27.335 "process_window_size_kb": 1024 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_iscsi_set_options", 00:22:27.335 "params": { 00:22:27.335 "timeout_sec": 30 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_nvme_set_options", 00:22:27.335 "params": { 00:22:27.335 "action_on_timeout": "none", 00:22:27.335 "allow_accel_sequence": false, 00:22:27.335 "arbitration_burst": 0, 00:22:27.335 "bdev_retry_count": 3, 00:22:27.335 "ctrlr_loss_timeout_sec": 0, 00:22:27.335 "delay_cmd_submit": true, 00:22:27.335 "dhchap_dhgroups": [ 00:22:27.335 "null", 00:22:27.335 "ffdhe2048", 00:22:27.335 "ffdhe3072", 00:22:27.335 "ffdhe4096", 00:22:27.335 "ffdhe6144", 00:22:27.335 "ffdhe8192" 00:22:27.335 ], 00:22:27.335 "dhchap_digests": [ 00:22:27.335 "sha256", 00:22:27.335 "sha384", 00:22:27.335 "sha512" 00:22:27.335 ], 00:22:27.335 "disable_auto_failback": false, 00:22:27.335 "fast_io_fail_timeout_sec": 0, 00:22:27.335 "generate_uuids": false, 00:22:27.335 "high_priority_weight": 0, 00:22:27.335 "io_path_stat": false, 00:22:27.335 "io_queue_requests": 0, 00:22:27.335 "keep_alive_timeout_ms": 10000, 00:22:27.335 "low_priority_weight": 0, 00:22:27.335 "medium_priority_weight": 0, 00:22:27.335 "nvme_adminq_poll_period_us": 10000, 00:22:27.335 "nvme_error_stat": false, 00:22:27.335 "nvme_ioq_poll_period_us": 0, 00:22:27.335 "rdma_cm_event_timeout_ms": 0, 00:22:27.335 "rdma_max_cq_size": 0, 00:22:27.335 "rdma_srq_size": 0, 00:22:27.335 "reconnect_delay_sec": 0, 00:22:27.335 "timeout_admin_us": 0, 00:22:27.335 "timeout_us": 0, 00:22:27.335 "transport_ack_timeout": 0, 00:22:27.335 "transport_retry_count": 4, 00:22:27.335 "transport_tos": 0 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_nvme_set_hotplug", 00:22:27.335 "params": { 00:22:27.335 "enable": false, 00:22:27.335 "period_us": 100000 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_malloc_create", 00:22:27.335 "params": { 00:22:27.335 "block_size": 4096, 00:22:27.335 "name": "malloc0", 00:22:27.335 "num_blocks": 8192, 00:22:27.335 
"optimal_io_boundary": 0, 00:22:27.335 "physical_block_size": 4096, 00:22:27.335 "uuid": "afe62213-fb4b-419f-967d-086554e29742" 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "bdev_wait_for_examine" 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "nbd", 00:22:27.335 "config": [] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "scheduler", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "framework_set_scheduler", 00:22:27.335 "params": { 00:22:27.335 "name": "static" 00:22:27.335 } 00:22:27.335 } 00:22:27.335 ] 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "subsystem": "nvmf", 00:22:27.335 "config": [ 00:22:27.335 { 00:22:27.335 "method": "nvmf_set_config", 00:22:27.335 "params": { 00:22:27.335 "admin_cmd_passthru": { 00:22:27.335 "identify_ctrlr": false 00:22:27.335 }, 00:22:27.335 "discovery_filter": "match_any" 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "nvmf_set_max_subsystems", 00:22:27.335 "params": { 00:22:27.335 "max_subsystems": 1024 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "nvmf_set_crdt", 00:22:27.335 "params": { 00:22:27.335 "crdt1": 0, 00:22:27.335 "crdt2": 0, 00:22:27.335 "crdt3": 0 00:22:27.335 } 00:22:27.335 }, 00:22:27.335 { 00:22:27.335 "method": "nvmf_create_transport", 00:22:27.335 "params": { 00:22:27.336 "abort_timeout_sec": 1, 00:22:27.336 "ack_timeout": 0, 00:22:27.336 "buf_cache_size": 4294967295, 00:22:27.336 "c2h_success": false, 00:22:27.336 "data_wr_pool_size": 0, 00:22:27.336 "dif_insert_or_strip": false, 00:22:27.336 "in_capsule_data_size": 4096, 00:22:27.336 "io_unit_size": 131072, 00:22:27.336 "max_aq_depth": 128, 00:22:27.336 "max_io_qpairs_per_ctrlr": 127, 00:22:27.336 "max_io_size": 131072, 00:22:27.336 "max_queue_depth": 128, 00:22:27.336 "num_shared_buffers": 511, 00:22:27.336 "sock_priority": 0, 00:22:27.336 "trtype": "TCP", 00:22:27.336 "zcopy": false 00:22:27.336 } 00:22:27.336 }, 00:22:27.336 { 00:22:27.336 "method": "nvmf_create_subsystem", 00:22:27.336 "params": { 00:22:27.336 "allow_any_host": false, 00:22:27.336 "ana_reporting": false, 00:22:27.336 "max_cntlid": 65519, 00:22:27.336 "max_namespaces": 10, 00:22:27.336 "min_cntlid": 1, 00:22:27.336 "model_number": "SPDK bdev Controller", 00:22:27.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.336 "serial_number": "SPDK00000000000001" 00:22:27.336 } 00:22:27.336 }, 00:22:27.336 { 00:22:27.336 "method": "nvmf_subsystem_add_host", 00:22:27.336 "params": { 00:22:27.336 "host": "nqn.2016-06.io.spdk:host1", 00:22:27.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.336 "psk": "/tmp/tmp.ITmYqK4ymx" 00:22:27.336 } 00:22:27.336 }, 00:22:27.336 { 00:22:27.336 "method": "nvmf_subsystem_add_ns", 00:22:27.336 "params": { 00:22:27.336 "namespace": { 00:22:27.336 "bdev_name": "malloc0", 00:22:27.336 "nguid": "AFE62213FB4B419F967D086554E29742", 00:22:27.336 "no_auto_visible": false, 00:22:27.336 "nsid": 1, 00:22:27.336 "uuid": "afe62213-fb4b-419f-967d-086554e29742" 00:22:27.336 }, 00:22:27.336 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:27.336 } 00:22:27.336 }, 00:22:27.336 { 00:22:27.336 "method": "nvmf_subsystem_add_listener", 00:22:27.336 "params": { 00:22:27.336 "listen_address": { 00:22:27.336 "adrfam": "IPv4", 00:22:27.336 "traddr": "10.0.0.2", 00:22:27.336 "trsvcid": "4420", 00:22:27.336 "trtype": "TCP" 00:22:27.336 }, 00:22:27.336 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.336 "secure_channel": true 00:22:27.336 } 00:22:27.336 } 00:22:27.336 ] 00:22:27.336 } 
00:22:27.336 ] 00:22:27.336 }' 00:22:27.336 14:05:06 -- nvmf/common.sh@470 -- # nvmfpid=80228 00:22:27.336 14:05:06 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:27.336 14:05:06 -- nvmf/common.sh@471 -- # waitforlisten 80228 00:22:27.336 14:05:06 -- common/autotest_common.sh@817 -- # '[' -z 80228 ']' 00:22:27.336 14:05:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.336 14:05:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:27.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.336 14:05:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.336 14:05:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:27.336 14:05:06 -- common/autotest_common.sh@10 -- # set +x 00:22:27.336 [2024-04-26 14:05:06.820709] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:27.336 [2024-04-26 14:05:06.820842] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:27.336 [2024-04-26 14:05:06.998114] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.595 [2024-04-26 14:05:07.246331] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.595 [2024-04-26 14:05:07.246392] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:27.595 [2024-04-26 14:05:07.246408] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.595 [2024-04-26 14:05:07.246430] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.595 [2024-04-26 14:05:07.246444] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
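The target above is launched with its JSON configuration handed over on an open file descriptor rather than a config file on disk. A minimal sketch of that pattern, assuming the '{ "subsystems": [...] }' document echoed at target/tls.sh@203 is held in a shell variable and that bash process substitution supplies the descriptor (the exact fd number, /dev/fd/62 in this run, is chosen by the shell):

# Hypothetical reproduction of the '-c /dev/fd/62' launch traced above; the
# config string is the JSON printed by tls.sh, elided here for brevity.
config='{ "subsystems": [ ... ] }'
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c <(echo "$config")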
00:22:27.595 [2024-04-26 14:05:07.246583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.162 [2024-04-26 14:05:07.810617] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.162 [2024-04-26 14:05:07.826541] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:28.421 [2024-04-26 14:05:07.842506] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.421 [2024-04-26 14:05:07.842757] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.421 14:05:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.421 14:05:07 -- common/autotest_common.sh@850 -- # return 0 00:22:28.421 14:05:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:28.421 14:05:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:28.421 14:05:07 -- common/autotest_common.sh@10 -- # set +x 00:22:28.421 14:05:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.421 14:05:07 -- target/tls.sh@207 -- # bdevperf_pid=80272 00:22:28.421 14:05:07 -- target/tls.sh@208 -- # waitforlisten 80272 /var/tmp/bdevperf.sock 00:22:28.421 14:05:07 -- common/autotest_common.sh@817 -- # '[' -z 80272 ']' 00:22:28.421 14:05:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.421 14:05:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.421 14:05:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:28.421 14:05:07 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:28.421 14:05:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.421 14:05:07 -- common/autotest_common.sh@10 -- # set +x 00:22:28.421 14:05:07 -- target/tls.sh@204 -- # echo '{ 00:22:28.421 "subsystems": [ 00:22:28.421 { 00:22:28.421 "subsystem": "keyring", 00:22:28.421 "config": [] 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "subsystem": "iobuf", 00:22:28.421 "config": [ 00:22:28.421 { 00:22:28.421 "method": "iobuf_set_options", 00:22:28.421 "params": { 00:22:28.421 "large_bufsize": 135168, 00:22:28.421 "large_pool_count": 1024, 00:22:28.421 "small_bufsize": 8192, 00:22:28.421 "small_pool_count": 8192 00:22:28.421 } 00:22:28.421 } 00:22:28.421 ] 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "subsystem": "sock", 00:22:28.421 "config": [ 00:22:28.421 { 00:22:28.421 "method": "sock_impl_set_options", 00:22:28.421 "params": { 00:22:28.421 "enable_ktls": false, 00:22:28.421 "enable_placement_id": 0, 00:22:28.421 "enable_quickack": false, 00:22:28.421 "enable_recv_pipe": true, 00:22:28.421 "enable_zerocopy_send_client": false, 00:22:28.421 "enable_zerocopy_send_server": true, 00:22:28.421 "impl_name": "posix", 00:22:28.421 "recv_buf_size": 2097152, 00:22:28.421 "send_buf_size": 2097152, 00:22:28.421 "tls_version": 0, 00:22:28.421 "zerocopy_threshold": 0 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "sock_impl_set_options", 00:22:28.421 "params": { 00:22:28.421 "enable_ktls": false, 00:22:28.421 "enable_placement_id": 0, 00:22:28.421 "enable_quickack": false, 00:22:28.421 "enable_recv_pipe": true, 00:22:28.421 "enable_zerocopy_send_client": false, 00:22:28.421 "enable_zerocopy_send_server": true, 00:22:28.421 "impl_name": "ssl", 00:22:28.421 "recv_buf_size": 4096, 00:22:28.421 "send_buf_size": 4096, 00:22:28.421 "tls_version": 0, 00:22:28.421 "zerocopy_threshold": 0 00:22:28.421 } 00:22:28.421 } 00:22:28.421 ] 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "subsystem": "vmd", 00:22:28.421 "config": [] 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "subsystem": "accel", 00:22:28.421 "config": [ 00:22:28.421 { 00:22:28.421 "method": "accel_set_options", 00:22:28.421 "params": { 00:22:28.421 "buf_count": 2048, 00:22:28.421 "large_cache_size": 16, 00:22:28.421 "sequence_count": 2048, 00:22:28.421 "small_cache_size": 128, 00:22:28.421 "task_count": 2048 00:22:28.421 } 00:22:28.421 } 00:22:28.421 ] 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "subsystem": "bdev", 00:22:28.421 "config": [ 00:22:28.421 { 00:22:28.421 "method": "bdev_set_options", 00:22:28.421 "params": { 00:22:28.421 "bdev_auto_examine": true, 00:22:28.421 "bdev_io_cache_size": 256, 00:22:28.421 "bdev_io_pool_size": 65535, 00:22:28.421 "iobuf_large_cache_size": 16, 00:22:28.421 "iobuf_small_cache_size": 128 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "bdev_raid_set_options", 00:22:28.421 "params": { 00:22:28.421 "process_window_size_kb": 1024 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "bdev_iscsi_set_options", 00:22:28.421 "params": { 00:22:28.421 "timeout_sec": 30 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "bdev_nvme_set_options", 00:22:28.421 "params": { 00:22:28.421 "action_on_timeout": "none", 00:22:28.421 "allow_accel_sequence": false, 00:22:28.421 "arbitration_burst": 0, 00:22:28.421 "bdev_retry_count": 3, 00:22:28.421 
"ctrlr_loss_timeout_sec": 0, 00:22:28.421 "delay_cmd_submit": true, 00:22:28.421 "dhchap_dhgroups": [ 00:22:28.421 "null", 00:22:28.421 "ffdhe2048", 00:22:28.421 "ffdhe3072", 00:22:28.421 "ffdhe4096", 00:22:28.421 "ffdhe6144", 00:22:28.421 "ffdhe8192" 00:22:28.421 ], 00:22:28.421 "dhchap_digests": [ 00:22:28.421 "sha256", 00:22:28.421 "sha384", 00:22:28.421 "sha512" 00:22:28.421 ], 00:22:28.421 "disable_auto_failback": false, 00:22:28.421 "fast_io_fail_timeout_sec": 0, 00:22:28.421 "generate_uuids": false, 00:22:28.421 "high_priority_weight": 0, 00:22:28.421 "io_path_stat": false, 00:22:28.421 "io_queue_requests": 512, 00:22:28.421 "keep_alive_timeout_ms": 10000, 00:22:28.421 "low_priority_weight": 0, 00:22:28.421 "medium_priority_weight": 0, 00:22:28.421 "nvme_adminq_poll_period_us": 10000, 00:22:28.421 "nvme_error_stat": false, 00:22:28.421 "nvme_ioq_poll_period_us": 0, 00:22:28.421 "rdma_cm_event_timeout_ms": 0, 00:22:28.421 "rdma_max_cq_size": 0, 00:22:28.421 "rdma_srq_size": 0, 00:22:28.421 "reconnect_delay_sec": 0, 00:22:28.421 "timeout_admin_us": 0, 00:22:28.421 "timeout_us": 0, 00:22:28.421 "transport_ack_timeout": 0, 00:22:28.421 "transport_retry_count": 4, 00:22:28.421 "transport_tos": 0 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "bdev_nvme_attach_controller", 00:22:28.421 "params": { 00:22:28.421 "adrfam": "IPv4", 00:22:28.421 "ctrlr_loss_timeout_sec": 0, 00:22:28.421 "ddgst": false, 00:22:28.421 "fast_io_fail_timeout_sec": 0, 00:22:28.421 "hdgst": false, 00:22:28.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.421 "name": "TLSTEST", 00:22:28.421 "prchk_guard": false, 00:22:28.421 "prchk_reftag": false, 00:22:28.421 "psk": "/tmp/tmp.ITmYqK4ymx", 00:22:28.421 "reconnect_delay_sec": 0, 00:22:28.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.421 "traddr": "10.0.0.2", 00:22:28.421 "trsvcid": "4420", 00:22:28.421 "trtype": "TCP" 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "bdev_nvme_set_hotplug", 00:22:28.421 "params": { 00:22:28.421 "enable": false, 00:22:28.421 "period_us": 100000 00:22:28.421 } 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "method": "bdev_wait_for_examine" 00:22:28.421 } 00:22:28.421 ] 00:22:28.421 }, 00:22:28.421 { 00:22:28.421 "subsystem": "nbd", 00:22:28.421 "config": [] 00:22:28.421 } 00:22:28.421 ] 00:22:28.421 }' 00:22:28.421 [2024-04-26 14:05:08.042013] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:28.422 [2024-04-26 14:05:08.042149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80272 ] 00:22:28.680 [2024-04-26 14:05:08.215319] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.938 [2024-04-26 14:05:08.463969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.505 [2024-04-26 14:05:08.922421] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.505 [2024-04-26 14:05:08.922569] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:29.505 14:05:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:29.505 14:05:09 -- common/autotest_common.sh@850 -- # return 0 00:22:29.505 14:05:09 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:29.505 Running I/O for 10 seconds... 00:22:41.710 00:22:41.710 Latency(us) 00:22:41.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.710 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.710 Verification LBA range: start 0x0 length 0x2000 00:22:41.710 TLSTESTn1 : 10.02 3920.88 15.32 0.00 0.00 32580.24 8317.02 33057.52 00:22:41.710 =================================================================================================================== 00:22:41.710 Total : 3920.88 15.32 0.00 0.00 32580.24 8317.02 33057.52 00:22:41.710 0 00:22:41.710 14:05:19 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.710 14:05:19 -- target/tls.sh@214 -- # killprocess 80272 00:22:41.710 14:05:19 -- common/autotest_common.sh@936 -- # '[' -z 80272 ']' 00:22:41.710 14:05:19 -- common/autotest_common.sh@940 -- # kill -0 80272 00:22:41.710 14:05:19 -- common/autotest_common.sh@941 -- # uname 00:22:41.710 14:05:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:41.710 14:05:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80272 00:22:41.710 14:05:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:22:41.710 14:05:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:22:41.710 killing process with pid 80272 00:22:41.710 14:05:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80272' 00:22:41.710 14:05:19 -- common/autotest_common.sh@955 -- # kill 80272 00:22:41.710 Received shutdown signal, test time was about 10.000000 seconds 00:22:41.710 00:22:41.710 Latency(us) 00:22:41.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.710 =================================================================================================================== 00:22:41.710 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.710 [2024-04-26 14:05:19.223189] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:41.710 14:05:19 -- common/autotest_common.sh@960 -- # wait 80272 00:22:41.710 14:05:20 -- target/tls.sh@215 -- # killprocess 80228 00:22:41.710 14:05:20 -- common/autotest_common.sh@936 -- # '[' -z 80228 ']' 00:22:41.710 14:05:20 -- common/autotest_common.sh@940 -- # kill -0 80228 00:22:41.710 14:05:20 
-- common/autotest_common.sh@941 -- # uname 00:22:41.710 14:05:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:41.710 14:05:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80228 00:22:41.710 14:05:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:41.710 14:05:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:41.710 killing process with pid 80228 00:22:41.710 14:05:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80228' 00:22:41.710 14:05:20 -- common/autotest_common.sh@955 -- # kill 80228 00:22:41.710 14:05:20 -- common/autotest_common.sh@960 -- # wait 80228 00:22:41.710 [2024-04-26 14:05:20.659223] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:42.648 14:05:22 -- target/tls.sh@218 -- # nvmfappstart 00:22:42.648 14:05:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:42.648 14:05:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:42.648 14:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.648 14:05:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:42.648 14:05:22 -- nvmf/common.sh@470 -- # nvmfpid=80450 00:22:42.648 14:05:22 -- nvmf/common.sh@471 -- # waitforlisten 80450 00:22:42.648 14:05:22 -- common/autotest_common.sh@817 -- # '[' -z 80450 ']' 00:22:42.648 14:05:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.648 14:05:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:42.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.648 14:05:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.648 14:05:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:42.648 14:05:22 -- common/autotest_common.sh@10 -- # set +x 00:22:42.648 [2024-04-26 14:05:22.310403] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:42.648 [2024-04-26 14:05:22.310520] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.907 [2024-04-26 14:05:22.481109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.166 [2024-04-26 14:05:22.741074] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.166 [2024-04-26 14:05:22.741133] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.166 [2024-04-26 14:05:22.741150] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.166 [2024-04-26 14:05:22.741185] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.166 [2024-04-26 14:05:22.741199] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
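The trace that follows provisions this target (pid 80450) for the TLS test over its RPC socket: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is exposed on a listener flagged as a secure channel (-k), and host nqn.2016-06.io.spdk:host1 is admitted with the PSK file. A condensed sketch of that sequence, with arguments copied from the commands traced below (the PSK path /tmp/tmp.ITmYqK4ymx is a per-run temporary file):

# setup_nvmf_tgt steps as traced at target/tls.sh@51-58; rpc.py stands for
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py and talks to the default
# application socket /var/tmp/spdk.sock.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx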
00:22:43.166 [2024-04-26 14:05:22.741248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.735 14:05:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:43.735 14:05:23 -- common/autotest_common.sh@850 -- # return 0 00:22:43.735 14:05:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:43.735 14:05:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:43.735 14:05:23 -- common/autotest_common.sh@10 -- # set +x 00:22:43.735 14:05:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.735 14:05:23 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.ITmYqK4ymx 00:22:43.735 14:05:23 -- target/tls.sh@49 -- # local key=/tmp/tmp.ITmYqK4ymx 00:22:43.735 14:05:23 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.995 [2024-04-26 14:05:23.453379] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.995 14:05:23 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:44.254 14:05:23 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.254 [2024-04-26 14:05:23.888826] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.254 [2024-04-26 14:05:23.889096] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.254 14:05:23 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.513 malloc0 00:22:44.513 14:05:24 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.771 14:05:24 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ITmYqK4ymx 00:22:45.031 [2024-04-26 14:05:24.591738] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:45.031 14:05:24 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:45.031 14:05:24 -- target/tls.sh@222 -- # bdevperf_pid=80552 00:22:45.031 14:05:24 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:45.031 14:05:24 -- target/tls.sh@225 -- # waitforlisten 80552 /var/tmp/bdevperf.sock 00:22:45.031 14:05:24 -- common/autotest_common.sh@817 -- # '[' -z 80552 ']' 00:22:45.031 14:05:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.031 14:05:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:45.031 14:05:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.031 14:05:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:45.031 14:05:24 -- common/autotest_common.sh@10 -- # set +x 00:22:45.031 [2024-04-26 14:05:24.702345] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:45.031 [2024-04-26 14:05:24.702504] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80552 ] 00:22:45.290 [2024-04-26 14:05:24.876177] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.549 [2024-04-26 14:05:25.132476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.118 14:05:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:46.118 14:05:25 -- common/autotest_common.sh@850 -- # return 0 00:22:46.119 14:05:25 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ITmYqK4ymx 00:22:46.378 14:05:25 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:46.378 [2024-04-26 14:05:26.015324] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.637 nvme0n1 00:22:46.637 14:05:26 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:46.637 Running I/O for 1 seconds... 00:22:48.018 00:22:48.018 Latency(us) 00:22:48.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.018 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:48.018 Verification LBA range: start 0x0 length 0x2000 00:22:48.018 nvme0n1 : 1.02 3717.06 14.52 0.00 0.00 34078.15 7369.51 48007.09 00:22:48.018 =================================================================================================================== 00:22:48.018 Total : 3717.06 14.52 0.00 0.00 34078.15 7369.51 48007.09 00:22:48.018 0 00:22:48.018 14:05:27 -- target/tls.sh@234 -- # killprocess 80552 00:22:48.018 14:05:27 -- common/autotest_common.sh@936 -- # '[' -z 80552 ']' 00:22:48.018 14:05:27 -- common/autotest_common.sh@940 -- # kill -0 80552 00:22:48.018 14:05:27 -- common/autotest_common.sh@941 -- # uname 00:22:48.018 14:05:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:48.018 14:05:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80552 00:22:48.018 14:05:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:48.018 killing process with pid 80552 00:22:48.018 Received shutdown signal, test time was about 1.000000 seconds 00:22:48.018 00:22:48.018 Latency(us) 00:22:48.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.018 =================================================================================================================== 00:22:48.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.018 14:05:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:48.018 14:05:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80552' 00:22:48.018 14:05:27 -- common/autotest_common.sh@955 -- # kill 80552 00:22:48.018 14:05:27 -- common/autotest_common.sh@960 -- # wait 80552 00:22:49.392 14:05:28 -- target/tls.sh@235 -- # killprocess 80450 00:22:49.392 14:05:28 -- common/autotest_common.sh@936 -- # '[' -z 80450 ']' 00:22:49.392 14:05:28 -- common/autotest_common.sh@940 -- # kill -0 80450 00:22:49.392 14:05:28 -- common/autotest_common.sh@941 -- # 
uname 00:22:49.392 14:05:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:49.392 14:05:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80450 00:22:49.392 killing process with pid 80450 00:22:49.392 14:05:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:49.392 14:05:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:49.392 14:05:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80450' 00:22:49.392 14:05:28 -- common/autotest_common.sh@955 -- # kill 80450 00:22:49.392 [2024-04-26 14:05:28.785573] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:49.392 14:05:28 -- common/autotest_common.sh@960 -- # wait 80450 00:22:50.768 14:05:30 -- target/tls.sh@238 -- # nvmfappstart 00:22:50.768 14:05:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:50.768 14:05:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:50.768 14:05:30 -- common/autotest_common.sh@10 -- # set +x 00:22:50.768 14:05:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:50.768 14:05:30 -- nvmf/common.sh@470 -- # nvmfpid=80652 00:22:50.768 14:05:30 -- nvmf/common.sh@471 -- # waitforlisten 80652 00:22:50.768 14:05:30 -- common/autotest_common.sh@817 -- # '[' -z 80652 ']' 00:22:50.768 14:05:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.768 14:05:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:50.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.768 14:05:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.768 14:05:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:50.768 14:05:30 -- common/autotest_common.sh@10 -- # set +x 00:22:50.768 [2024-04-26 14:05:30.308532] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:50.768 [2024-04-26 14:05:30.308647] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.027 [2024-04-26 14:05:30.482212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.286 [2024-04-26 14:05:30.718265] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.286 [2024-04-26 14:05:30.718322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.286 [2024-04-26 14:05:30.718338] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.286 [2024-04-26 14:05:30.718360] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.286 [2024-04-26 14:05:30.718372] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
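After this target (pid 80652) comes up, the trace below provisions it the same way and then drives it from a bdevperf initiator that supplies the PSK through the keyring interface rather than a raw path: the key file is registered as key0, and the controller is attached with --psk key0. A sketch of the two initiator-side RPCs, with arguments copied from the commands traced below (the bdevperf application listens on /var/tmp/bdevperf.sock):

# Initiator-side keyring flow as traced at target/tls.sh@255 and @256;
# rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py.
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ITmYqK4ymx
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1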
00:22:51.286 [2024-04-26 14:05:30.718412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:51.553 14:05:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:51.554 14:05:31 -- common/autotest_common.sh@850 -- # return 0 00:22:51.554 14:05:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:51.554 14:05:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:51.554 14:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 14:05:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.554 14:05:31 -- target/tls.sh@239 -- # rpc_cmd 00:22:51.554 14:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.554 14:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.554 [2024-04-26 14:05:31.191759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.815 malloc0 00:22:51.815 [2024-04-26 14:05:31.259171] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.815 [2024-04-26 14:05:31.259386] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.815 14:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.815 14:05:31 -- target/tls.sh@252 -- # bdevperf_pid=80703 00:22:51.815 14:05:31 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:51.815 14:05:31 -- target/tls.sh@254 -- # waitforlisten 80703 /var/tmp/bdevperf.sock 00:22:51.815 14:05:31 -- common/autotest_common.sh@817 -- # '[' -z 80703 ']' 00:22:51.815 14:05:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.815 14:05:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:51.815 14:05:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.815 14:05:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:51.815 14:05:31 -- common/autotest_common.sh@10 -- # set +x 00:22:51.815 [2024-04-26 14:05:31.378679] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:51.815 [2024-04-26 14:05:31.378820] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80703 ] 00:22:52.074 [2024-04-26 14:05:31.547005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.332 [2024-04-26 14:05:31.781078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.590 14:05:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:52.590 14:05:32 -- common/autotest_common.sh@850 -- # return 0 00:22:52.590 14:05:32 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ITmYqK4ymx 00:22:52.848 14:05:32 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:53.106 [2024-04-26 14:05:32.545901] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.106 nvme0n1 00:22:53.106 14:05:32 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.106 Running I/O for 1 seconds... 00:22:54.483 00:22:54.483 Latency(us) 00:22:54.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.483 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:54.483 Verification LBA range: start 0x0 length 0x2000 00:22:54.483 nvme0n1 : 1.02 4377.83 17.10 0.00 0.00 28930.90 6895.76 18107.94 00:22:54.483 =================================================================================================================== 00:22:54.483 Total : 4377.83 17.10 0.00 0.00 28930.90 6895.76 18107.94 00:22:54.483 0 00:22:54.483 14:05:33 -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:54.483 14:05:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:54.483 14:05:33 -- common/autotest_common.sh@10 -- # set +x 00:22:54.483 14:05:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:54.483 14:05:33 -- target/tls.sh@263 -- # tgtcfg='{ 00:22:54.483 "subsystems": [ 00:22:54.483 { 00:22:54.483 "subsystem": "keyring", 00:22:54.483 "config": [ 00:22:54.483 { 00:22:54.483 "method": "keyring_file_add_key", 00:22:54.483 "params": { 00:22:54.483 "name": "key0", 00:22:54.483 "path": "/tmp/tmp.ITmYqK4ymx" 00:22:54.483 } 00:22:54.483 } 00:22:54.483 ] 00:22:54.483 }, 00:22:54.483 { 00:22:54.483 "subsystem": "iobuf", 00:22:54.483 "config": [ 00:22:54.483 { 00:22:54.483 "method": "iobuf_set_options", 00:22:54.483 "params": { 00:22:54.483 "large_bufsize": 135168, 00:22:54.483 "large_pool_count": 1024, 00:22:54.483 "small_bufsize": 8192, 00:22:54.483 "small_pool_count": 8192 00:22:54.483 } 00:22:54.483 } 00:22:54.483 ] 00:22:54.483 }, 00:22:54.483 { 00:22:54.483 "subsystem": "sock", 00:22:54.483 "config": [ 00:22:54.483 { 00:22:54.483 "method": "sock_impl_set_options", 00:22:54.483 "params": { 00:22:54.483 "enable_ktls": false, 00:22:54.483 "enable_placement_id": 0, 00:22:54.483 "enable_quickack": false, 00:22:54.483 "enable_recv_pipe": true, 00:22:54.483 "enable_zerocopy_send_client": false, 00:22:54.483 "enable_zerocopy_send_server": true, 00:22:54.483 "impl_name": "posix", 00:22:54.483 "recv_buf_size": 2097152, 00:22:54.483 "send_buf_size": 2097152, 
00:22:54.483 "tls_version": 0, 00:22:54.483 "zerocopy_threshold": 0 00:22:54.483 } 00:22:54.483 }, 00:22:54.483 { 00:22:54.483 "method": "sock_impl_set_options", 00:22:54.483 "params": { 00:22:54.483 "enable_ktls": false, 00:22:54.483 "enable_placement_id": 0, 00:22:54.483 "enable_quickack": false, 00:22:54.483 "enable_recv_pipe": true, 00:22:54.483 "enable_zerocopy_send_client": false, 00:22:54.483 "enable_zerocopy_send_server": true, 00:22:54.483 "impl_name": "ssl", 00:22:54.484 "recv_buf_size": 4096, 00:22:54.484 "send_buf_size": 4096, 00:22:54.484 "tls_version": 0, 00:22:54.484 "zerocopy_threshold": 0 00:22:54.484 } 00:22:54.484 } 00:22:54.484 ] 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "subsystem": "vmd", 00:22:54.484 "config": [] 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "subsystem": "accel", 00:22:54.484 "config": [ 00:22:54.484 { 00:22:54.484 "method": "accel_set_options", 00:22:54.484 "params": { 00:22:54.484 "buf_count": 2048, 00:22:54.484 "large_cache_size": 16, 00:22:54.484 "sequence_count": 2048, 00:22:54.484 "small_cache_size": 128, 00:22:54.484 "task_count": 2048 00:22:54.484 } 00:22:54.484 } 00:22:54.484 ] 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "subsystem": "bdev", 00:22:54.484 "config": [ 00:22:54.484 { 00:22:54.484 "method": "bdev_set_options", 00:22:54.484 "params": { 00:22:54.484 "bdev_auto_examine": true, 00:22:54.484 "bdev_io_cache_size": 256, 00:22:54.484 "bdev_io_pool_size": 65535, 00:22:54.484 "iobuf_large_cache_size": 16, 00:22:54.484 "iobuf_small_cache_size": 128 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "bdev_raid_set_options", 00:22:54.484 "params": { 00:22:54.484 "process_window_size_kb": 1024 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "bdev_iscsi_set_options", 00:22:54.484 "params": { 00:22:54.484 "timeout_sec": 30 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "bdev_nvme_set_options", 00:22:54.484 "params": { 00:22:54.484 "action_on_timeout": "none", 00:22:54.484 "allow_accel_sequence": false, 00:22:54.484 "arbitration_burst": 0, 00:22:54.484 "bdev_retry_count": 3, 00:22:54.484 "ctrlr_loss_timeout_sec": 0, 00:22:54.484 "delay_cmd_submit": true, 00:22:54.484 "dhchap_dhgroups": [ 00:22:54.484 "null", 00:22:54.484 "ffdhe2048", 00:22:54.484 "ffdhe3072", 00:22:54.484 "ffdhe4096", 00:22:54.484 "ffdhe6144", 00:22:54.484 "ffdhe8192" 00:22:54.484 ], 00:22:54.484 "dhchap_digests": [ 00:22:54.484 "sha256", 00:22:54.484 "sha384", 00:22:54.484 "sha512" 00:22:54.484 ], 00:22:54.484 "disable_auto_failback": false, 00:22:54.484 "fast_io_fail_timeout_sec": 0, 00:22:54.484 "generate_uuids": false, 00:22:54.484 "high_priority_weight": 0, 00:22:54.484 "io_path_stat": false, 00:22:54.484 "io_queue_requests": 0, 00:22:54.484 "keep_alive_timeout_ms": 10000, 00:22:54.484 "low_priority_weight": 0, 00:22:54.484 "medium_priority_weight": 0, 00:22:54.484 "nvme_adminq_poll_period_us": 10000, 00:22:54.484 "nvme_error_stat": false, 00:22:54.484 "nvme_ioq_poll_period_us": 0, 00:22:54.484 "rdma_cm_event_timeout_ms": 0, 00:22:54.484 "rdma_max_cq_size": 0, 00:22:54.484 "rdma_srq_size": 0, 00:22:54.484 "reconnect_delay_sec": 0, 00:22:54.484 "timeout_admin_us": 0, 00:22:54.484 "timeout_us": 0, 00:22:54.484 "transport_ack_timeout": 0, 00:22:54.484 "transport_retry_count": 4, 00:22:54.484 "transport_tos": 0 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "bdev_nvme_set_hotplug", 00:22:54.484 "params": { 00:22:54.484 "enable": false, 00:22:54.484 "period_us": 100000 00:22:54.484 } 00:22:54.484 
}, 00:22:54.484 { 00:22:54.484 "method": "bdev_malloc_create", 00:22:54.484 "params": { 00:22:54.484 "block_size": 4096, 00:22:54.484 "name": "malloc0", 00:22:54.484 "num_blocks": 8192, 00:22:54.484 "optimal_io_boundary": 0, 00:22:54.484 "physical_block_size": 4096, 00:22:54.484 "uuid": "483ee465-4cdd-48f9-86fe-9b9a7286941f" 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "bdev_wait_for_examine" 00:22:54.484 } 00:22:54.484 ] 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "subsystem": "nbd", 00:22:54.484 "config": [] 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "subsystem": "scheduler", 00:22:54.484 "config": [ 00:22:54.484 { 00:22:54.484 "method": "framework_set_scheduler", 00:22:54.484 "params": { 00:22:54.484 "name": "static" 00:22:54.484 } 00:22:54.484 } 00:22:54.484 ] 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "subsystem": "nvmf", 00:22:54.484 "config": [ 00:22:54.484 { 00:22:54.484 "method": "nvmf_set_config", 00:22:54.484 "params": { 00:22:54.484 "admin_cmd_passthru": { 00:22:54.484 "identify_ctrlr": false 00:22:54.484 }, 00:22:54.484 "discovery_filter": "match_any" 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_set_max_subsystems", 00:22:54.484 "params": { 00:22:54.484 "max_subsystems": 1024 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_set_crdt", 00:22:54.484 "params": { 00:22:54.484 "crdt1": 0, 00:22:54.484 "crdt2": 0, 00:22:54.484 "crdt3": 0 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_create_transport", 00:22:54.484 "params": { 00:22:54.484 "abort_timeout_sec": 1, 00:22:54.484 "ack_timeout": 0, 00:22:54.484 "buf_cache_size": 4294967295, 00:22:54.484 "c2h_success": false, 00:22:54.484 "data_wr_pool_size": 0, 00:22:54.484 "dif_insert_or_strip": false, 00:22:54.484 "in_capsule_data_size": 4096, 00:22:54.484 "io_unit_size": 131072, 00:22:54.484 "max_aq_depth": 128, 00:22:54.484 "max_io_qpairs_per_ctrlr": 127, 00:22:54.484 "max_io_size": 131072, 00:22:54.484 "max_queue_depth": 128, 00:22:54.484 "num_shared_buffers": 511, 00:22:54.484 "sock_priority": 0, 00:22:54.484 "trtype": "TCP", 00:22:54.484 "zcopy": false 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_create_subsystem", 00:22:54.484 "params": { 00:22:54.484 "allow_any_host": false, 00:22:54.484 "ana_reporting": false, 00:22:54.484 "max_cntlid": 65519, 00:22:54.484 "max_namespaces": 32, 00:22:54.484 "min_cntlid": 1, 00:22:54.484 "model_number": "SPDK bdev Controller", 00:22:54.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.484 "serial_number": "00000000000000000000" 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_subsystem_add_host", 00:22:54.484 "params": { 00:22:54.484 "host": "nqn.2016-06.io.spdk:host1", 00:22:54.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.484 "psk": "key0" 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_subsystem_add_ns", 00:22:54.484 "params": { 00:22:54.484 "namespace": { 00:22:54.484 "bdev_name": "malloc0", 00:22:54.484 "nguid": "483EE4654CDD48F986FE9B9A7286941F", 00:22:54.484 "no_auto_visible": false, 00:22:54.484 "nsid": 1, 00:22:54.484 "uuid": "483ee465-4cdd-48f9-86fe-9b9a7286941f" 00:22:54.484 }, 00:22:54.484 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:54.484 } 00:22:54.484 }, 00:22:54.484 { 00:22:54.484 "method": "nvmf_subsystem_add_listener", 00:22:54.484 "params": { 00:22:54.484 "listen_address": { 00:22:54.484 "adrfam": "IPv4", 00:22:54.484 "traddr": "10.0.0.2", 00:22:54.484 "trsvcid": "4420", 00:22:54.484 
"trtype": "TCP" 00:22:54.484 }, 00:22:54.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.484 "secure_channel": true 00:22:54.484 } 00:22:54.484 } 00:22:54.484 ] 00:22:54.484 } 00:22:54.484 ] 00:22:54.484 }' 00:22:54.484 14:05:33 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:54.744 14:05:34 -- target/tls.sh@264 -- # bperfcfg='{ 00:22:54.744 "subsystems": [ 00:22:54.744 { 00:22:54.744 "subsystem": "keyring", 00:22:54.744 "config": [ 00:22:54.744 { 00:22:54.744 "method": "keyring_file_add_key", 00:22:54.744 "params": { 00:22:54.744 "name": "key0", 00:22:54.744 "path": "/tmp/tmp.ITmYqK4ymx" 00:22:54.744 } 00:22:54.744 } 00:22:54.744 ] 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "subsystem": "iobuf", 00:22:54.744 "config": [ 00:22:54.744 { 00:22:54.744 "method": "iobuf_set_options", 00:22:54.744 "params": { 00:22:54.744 "large_bufsize": 135168, 00:22:54.744 "large_pool_count": 1024, 00:22:54.744 "small_bufsize": 8192, 00:22:54.744 "small_pool_count": 8192 00:22:54.744 } 00:22:54.744 } 00:22:54.744 ] 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "subsystem": "sock", 00:22:54.744 "config": [ 00:22:54.744 { 00:22:54.744 "method": "sock_impl_set_options", 00:22:54.744 "params": { 00:22:54.744 "enable_ktls": false, 00:22:54.744 "enable_placement_id": 0, 00:22:54.744 "enable_quickack": false, 00:22:54.744 "enable_recv_pipe": true, 00:22:54.744 "enable_zerocopy_send_client": false, 00:22:54.744 "enable_zerocopy_send_server": true, 00:22:54.744 "impl_name": "posix", 00:22:54.744 "recv_buf_size": 2097152, 00:22:54.744 "send_buf_size": 2097152, 00:22:54.744 "tls_version": 0, 00:22:54.744 "zerocopy_threshold": 0 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "sock_impl_set_options", 00:22:54.744 "params": { 00:22:54.744 "enable_ktls": false, 00:22:54.744 "enable_placement_id": 0, 00:22:54.744 "enable_quickack": false, 00:22:54.744 "enable_recv_pipe": true, 00:22:54.744 "enable_zerocopy_send_client": false, 00:22:54.744 "enable_zerocopy_send_server": true, 00:22:54.744 "impl_name": "ssl", 00:22:54.744 "recv_buf_size": 4096, 00:22:54.744 "send_buf_size": 4096, 00:22:54.744 "tls_version": 0, 00:22:54.744 "zerocopy_threshold": 0 00:22:54.744 } 00:22:54.744 } 00:22:54.744 ] 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "subsystem": "vmd", 00:22:54.744 "config": [] 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "subsystem": "accel", 00:22:54.744 "config": [ 00:22:54.744 { 00:22:54.744 "method": "accel_set_options", 00:22:54.744 "params": { 00:22:54.744 "buf_count": 2048, 00:22:54.744 "large_cache_size": 16, 00:22:54.744 "sequence_count": 2048, 00:22:54.744 "small_cache_size": 128, 00:22:54.744 "task_count": 2048 00:22:54.744 } 00:22:54.744 } 00:22:54.744 ] 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "subsystem": "bdev", 00:22:54.744 "config": [ 00:22:54.744 { 00:22:54.744 "method": "bdev_set_options", 00:22:54.744 "params": { 00:22:54.744 "bdev_auto_examine": true, 00:22:54.744 "bdev_io_cache_size": 256, 00:22:54.744 "bdev_io_pool_size": 65535, 00:22:54.744 "iobuf_large_cache_size": 16, 00:22:54.744 "iobuf_small_cache_size": 128 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "bdev_raid_set_options", 00:22:54.744 "params": { 00:22:54.744 "process_window_size_kb": 1024 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "bdev_iscsi_set_options", 00:22:54.744 "params": { 00:22:54.744 "timeout_sec": 30 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": 
"bdev_nvme_set_options", 00:22:54.744 "params": { 00:22:54.744 "action_on_timeout": "none", 00:22:54.744 "allow_accel_sequence": false, 00:22:54.744 "arbitration_burst": 0, 00:22:54.744 "bdev_retry_count": 3, 00:22:54.744 "ctrlr_loss_timeout_sec": 0, 00:22:54.744 "delay_cmd_submit": true, 00:22:54.744 "dhchap_dhgroups": [ 00:22:54.744 "null", 00:22:54.744 "ffdhe2048", 00:22:54.744 "ffdhe3072", 00:22:54.744 "ffdhe4096", 00:22:54.744 "ffdhe6144", 00:22:54.744 "ffdhe8192" 00:22:54.744 ], 00:22:54.744 "dhchap_digests": [ 00:22:54.744 "sha256", 00:22:54.744 "sha384", 00:22:54.744 "sha512" 00:22:54.744 ], 00:22:54.744 "disable_auto_failback": false, 00:22:54.744 "fast_io_fail_timeout_sec": 0, 00:22:54.744 "generate_uuids": false, 00:22:54.744 "high_priority_weight": 0, 00:22:54.744 "io_path_stat": false, 00:22:54.744 "io_queue_requests": 512, 00:22:54.744 "keep_alive_timeout_ms": 10000, 00:22:54.744 "low_priority_weight": 0, 00:22:54.744 "medium_priority_weight": 0, 00:22:54.744 "nvme_adminq_poll_period_us": 10000, 00:22:54.744 "nvme_error_stat": false, 00:22:54.744 "nvme_ioq_poll_period_us": 0, 00:22:54.744 "rdma_cm_event_timeout_ms": 0, 00:22:54.744 "rdma_max_cq_size": 0, 00:22:54.744 "rdma_srq_size": 0, 00:22:54.744 "reconnect_delay_sec": 0, 00:22:54.744 "timeout_admin_us": 0, 00:22:54.744 "timeout_us": 0, 00:22:54.744 "transport_ack_timeout": 0, 00:22:54.744 "transport_retry_count": 4, 00:22:54.744 "transport_tos": 0 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "bdev_nvme_attach_controller", 00:22:54.744 "params": { 00:22:54.744 "adrfam": "IPv4", 00:22:54.744 "ctrlr_loss_timeout_sec": 0, 00:22:54.744 "ddgst": false, 00:22:54.744 "fast_io_fail_timeout_sec": 0, 00:22:54.744 "hdgst": false, 00:22:54.744 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.744 "name": "nvme0", 00:22:54.744 "prchk_guard": false, 00:22:54.744 "prchk_reftag": false, 00:22:54.744 "psk": "key0", 00:22:54.744 "reconnect_delay_sec": 0, 00:22:54.744 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.744 "traddr": "10.0.0.2", 00:22:54.744 "trsvcid": "4420", 00:22:54.744 "trtype": "TCP" 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "bdev_nvme_set_hotplug", 00:22:54.744 "params": { 00:22:54.744 "enable": false, 00:22:54.744 "period_us": 100000 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "bdev_enable_histogram", 00:22:54.744 "params": { 00:22:54.744 "enable": true, 00:22:54.744 "name": "nvme0n1" 00:22:54.744 } 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "method": "bdev_wait_for_examine" 00:22:54.744 } 00:22:54.744 ] 00:22:54.744 }, 00:22:54.744 { 00:22:54.744 "subsystem": "nbd", 00:22:54.744 "config": [] 00:22:54.744 } 00:22:54.744 ] 00:22:54.744 }' 00:22:54.744 14:05:34 -- target/tls.sh@266 -- # killprocess 80703 00:22:54.744 14:05:34 -- common/autotest_common.sh@936 -- # '[' -z 80703 ']' 00:22:54.744 14:05:34 -- common/autotest_common.sh@940 -- # kill -0 80703 00:22:54.744 14:05:34 -- common/autotest_common.sh@941 -- # uname 00:22:54.744 14:05:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.744 14:05:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80703 00:22:54.745 14:05:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:54.745 14:05:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:54.745 killing process with pid 80703 00:22:54.745 14:05:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80703' 00:22:54.745 Received shutdown signal, test time was about 
1.000000 seconds 00:22:54.745 00:22:54.745 Latency(us) 00:22:54.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.745 =================================================================================================================== 00:22:54.745 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.745 14:05:34 -- common/autotest_common.sh@955 -- # kill 80703 00:22:54.745 14:05:34 -- common/autotest_common.sh@960 -- # wait 80703 00:22:56.123 14:05:35 -- target/tls.sh@267 -- # killprocess 80652 00:22:56.123 14:05:35 -- common/autotest_common.sh@936 -- # '[' -z 80652 ']' 00:22:56.123 14:05:35 -- common/autotest_common.sh@940 -- # kill -0 80652 00:22:56.123 14:05:35 -- common/autotest_common.sh@941 -- # uname 00:22:56.123 14:05:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.123 14:05:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80652 00:22:56.123 14:05:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:56.123 killing process with pid 80652 00:22:56.123 14:05:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:56.123 14:05:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80652' 00:22:56.123 14:05:35 -- common/autotest_common.sh@955 -- # kill 80652 00:22:56.123 14:05:35 -- common/autotest_common.sh@960 -- # wait 80652 00:22:57.500 14:05:36 -- target/tls.sh@269 -- # echo '{ 00:22:57.500 "subsystems": [ 00:22:57.500 { 00:22:57.500 "subsystem": "keyring", 00:22:57.500 "config": [ 00:22:57.500 { 00:22:57.500 "method": "keyring_file_add_key", 00:22:57.500 "params": { 00:22:57.500 "name": "key0", 00:22:57.500 "path": "/tmp/tmp.ITmYqK4ymx" 00:22:57.500 } 00:22:57.500 } 00:22:57.500 ] 00:22:57.500 }, 00:22:57.500 { 00:22:57.500 "subsystem": "iobuf", 00:22:57.500 "config": [ 00:22:57.500 { 00:22:57.500 "method": "iobuf_set_options", 00:22:57.500 "params": { 00:22:57.500 "large_bufsize": 135168, 00:22:57.500 "large_pool_count": 1024, 00:22:57.500 "small_bufsize": 8192, 00:22:57.500 "small_pool_count": 8192 00:22:57.500 } 00:22:57.500 } 00:22:57.500 ] 00:22:57.500 }, 00:22:57.500 { 00:22:57.500 "subsystem": "sock", 00:22:57.500 "config": [ 00:22:57.501 { 00:22:57.501 "method": "sock_impl_set_options", 00:22:57.501 "params": { 00:22:57.501 "enable_ktls": false, 00:22:57.501 "enable_placement_id": 0, 00:22:57.501 "enable_quickack": false, 00:22:57.501 "enable_recv_pipe": true, 00:22:57.501 "enable_zerocopy_send_client": false, 00:22:57.501 "enable_zerocopy_send_server": true, 00:22:57.501 "impl_name": "posix", 00:22:57.501 "recv_buf_size": 2097152, 00:22:57.501 "send_buf_size": 2097152, 00:22:57.501 "tls_version": 0, 00:22:57.501 "zerocopy_threshold": 0 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "sock_impl_set_options", 00:22:57.501 "params": { 00:22:57.501 "enable_ktls": false, 00:22:57.501 "enable_placement_id": 0, 00:22:57.501 "enable_quickack": false, 00:22:57.501 "enable_recv_pipe": true, 00:22:57.501 "enable_zerocopy_send_client": false, 00:22:57.501 "enable_zerocopy_send_server": true, 00:22:57.501 "impl_name": "ssl", 00:22:57.501 "recv_buf_size": 4096, 00:22:57.501 "send_buf_size": 4096, 00:22:57.501 "tls_version": 0, 00:22:57.501 "zerocopy_threshold": 0 00:22:57.501 } 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "subsystem": "vmd", 00:22:57.501 "config": [] 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "subsystem": "accel", 00:22:57.501 "config": [ 00:22:57.501 { 00:22:57.501 "method": "accel_set_options", 
00:22:57.501 "params": { 00:22:57.501 "buf_count": 2048, 00:22:57.501 "large_cache_size": 16, 00:22:57.501 "sequence_count": 2048, 00:22:57.501 "small_cache_size": 128, 00:22:57.501 "task_count": 2048 00:22:57.501 } 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "subsystem": "bdev", 00:22:57.501 "config": [ 00:22:57.501 { 00:22:57.501 "method": "bdev_set_options", 00:22:57.501 "params": { 00:22:57.501 "bdev_auto_examine": true, 00:22:57.501 "bdev_io_cache_size": 256, 00:22:57.501 "bdev_io_pool_size": 65535, 00:22:57.501 "iobuf_large_cache_size": 16, 00:22:57.501 "iobuf_small_cache_size": 128 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "bdev_raid_set_options", 00:22:57.501 "params": { 00:22:57.501 "process_window_size_kb": 1024 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "bdev_iscsi_set_options", 00:22:57.501 "params": { 00:22:57.501 "timeout_sec": 30 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "bdev_nvme_set_options", 00:22:57.501 "params": { 00:22:57.501 "action_on_timeout": "none", 00:22:57.501 "allow_accel_sequence": false, 00:22:57.501 "arbitration_burst": 0, 00:22:57.501 "bdev_retry_count": 3, 00:22:57.501 "ctrlr_loss_timeout_sec": 0, 00:22:57.501 "delay_cmd_submit": true, 00:22:57.501 "dhchap_dhgroups": [ 00:22:57.501 "null", 00:22:57.501 "ffdhe2048", 00:22:57.501 "ffdhe3072", 00:22:57.501 "ffdhe4096", 00:22:57.501 "ffdhe6144", 00:22:57.501 "ffdhe8192" 00:22:57.501 ], 00:22:57.501 "dhchap_digests": [ 00:22:57.501 "sha256", 00:22:57.501 "sha384", 00:22:57.501 "sha512" 00:22:57.501 ], 00:22:57.501 "disable_auto_failback": false, 00:22:57.501 "fast_io_fail_timeout_sec": 0, 00:22:57.501 "generate_uuids": false, 00:22:57.501 "high_priority_weight": 0, 00:22:57.501 "io_path_stat": false, 00:22:57.501 "io_queue_requests": 0, 00:22:57.501 "keep_alive_timeout_ms": 10000, 00:22:57.501 "low_priority_weight": 0, 00:22:57.501 "medium_priority_weight": 0, 00:22:57.501 "nvme_adminq_poll_period_us": 10000, 00:22:57.501 "nvme_error_stat": false, 00:22:57.501 "nvme_ioq_poll_period_us": 0, 00:22:57.501 "rdma_cm_event_timeout_ms": 0, 00:22:57.501 "rdma_max_cq_size": 0, 00:22:57.501 "rdma_srq_size": 0, 00:22:57.501 "reconnect_delay_sec": 0, 00:22:57.501 "timeout_admin_us": 0, 00:22:57.501 "timeout_us": 0, 00:22:57.501 "transport_ack_timeout": 0, 00:22:57.501 "transport_retry_count": 4, 00:22:57.501 "transport_tos": 0 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "bdev_nvme_set_hotplug", 00:22:57.501 "params": { 00:22:57.501 "enable": false, 00:22:57.501 "period_us": 100000 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "bdev_malloc_create", 00:22:57.501 "params": { 00:22:57.501 "block_size": 4096, 00:22:57.501 "name": "malloc0", 00:22:57.501 "num_blocks": 8192, 00:22:57.501 "optimal_io_boundary": 0, 00:22:57.501 "physical_block_size": 4096, 00:22:57.501 "uuid": "483ee465-4cdd-48f9-86fe-9b9a7286941f" 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "bdev_wait_for_examine" 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "subsystem": "nbd", 00:22:57.501 "config": [] 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "subsystem": "scheduler", 00:22:57.501 "config": [ 00:22:57.501 { 00:22:57.501 "method": "framework_set_scheduler", 00:22:57.501 "params": { 00:22:57.501 "name": "static" 00:22:57.501 } 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "subsystem": "nvmf", 00:22:57.501 "config": [ 
00:22:57.501 { 00:22:57.501 "method": "nvmf_set_config", 00:22:57.501 "params": { 00:22:57.501 "admin_cmd_passthru": { 00:22:57.501 "identify_ctrlr": false 00:22:57.501 }, 00:22:57.501 "discovery_filter": "match_any" 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_set_max_subsystems", 00:22:57.501 "params": { 00:22:57.501 "max_subsystems": 1024 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_set_crdt", 00:22:57.501 "params": { 00:22:57.501 "crdt1": 0, 00:22:57.501 "crdt2": 0, 00:22:57.501 "crdt3": 0 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_create_transport", 00:22:57.501 "params": { 00:22:57.501 "abort_timeout_sec": 1, 00:22:57.501 "ack_timeout": 0, 00:22:57.501 "buf_cache_size": 4294967295, 00:22:57.501 "c2h_success": false, 00:22:57.501 "data_wr_pool_size": 0, 00:22:57.501 "dif_insert_or_strip": false, 00:22:57.501 "in_capsule_data_size": 4096, 00:22:57.501 "io_unit_size": 131072, 00:22:57.501 "max_aq_depth": 128, 00:22:57.501 "max_io_qpairs_per_ctrlr": 127, 00:22:57.501 "max_io_size": 131072, 00:22:57.501 "max_queue_depth": 128, 00:22:57.501 "num_shared_buffers": 511, 00:22:57.501 "sock_priority": 0, 00:22:57.501 "trtype": "TCP", 00:22:57.501 "zcopy": false 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_create_subsystem", 00:22:57.501 "params": { 00:22:57.501 "allow_any_host": false, 00:22:57.501 "ana_reporting": false, 00:22:57.501 "max_cntlid": 65519, 00:22:57.501 "max_namespaces": 32, 00:22:57.501 "min_cntlid": 1, 00:22:57.501 "model_number": "SPD 14:05:36 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:57.501 K bdev Controller", 00:22:57.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.501 "serial_number": "00000000000000000000" 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_subsystem_add_host", 00:22:57.501 "params": { 00:22:57.501 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.501 "psk": "key0" 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_subsystem_add_ns", 00:22:57.501 "params": { 00:22:57.501 "namespace": { 00:22:57.501 "bdev_name": "malloc0", 00:22:57.501 "nguid": "483EE4654CDD48F986FE9B9A7286941F", 00:22:57.501 "no_auto_visible": false, 00:22:57.501 "nsid": 1, 00:22:57.501 "uuid": "483ee465-4cdd-48f9-86fe-9b9a7286941f" 00:22:57.501 }, 00:22:57.501 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:22:57.501 } 00:22:57.501 }, 00:22:57.501 { 00:22:57.501 "method": "nvmf_subsystem_add_listener", 00:22:57.501 "params": { 00:22:57.501 "listen_address": { 00:22:57.501 "adrfam": "IPv4", 00:22:57.501 "traddr": "10.0.0.2", 00:22:57.501 "trsvcid": "4420", 00:22:57.501 "trtype": "TCP" 00:22:57.501 }, 00:22:57.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.501 "secure_channel": true 00:22:57.501 } 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 } 00:22:57.501 ] 00:22:57.501 }' 00:22:57.501 14:05:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:57.501 14:05:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:57.501 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:22:57.501 14:05:36 -- nvmf/common.sh@470 -- # nvmfpid=80814 00:22:57.501 14:05:36 -- nvmf/common.sh@471 -- # waitforlisten 80814 00:22:57.501 14:05:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:57.501 14:05:36 -- common/autotest_common.sh@817 -- # '[' -z 80814 ']' 00:22:57.501 14:05:36 
-- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.501 14:05:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.502 14:05:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.502 14:05:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:57.502 14:05:36 -- common/autotest_common.sh@10 -- # set +x 00:22:57.502 [2024-04-26 14:05:37.092599] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:22:57.502 [2024-04-26 14:05:37.092722] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.760 [2024-04-26 14:05:37.265794] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.019 [2024-04-26 14:05:37.504138] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.019 [2024-04-26 14:05:37.504216] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.019 [2024-04-26 14:05:37.504232] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.019 [2024-04-26 14:05:37.504252] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.019 [2024-04-26 14:05:37.504266] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.019 [2024-04-26 14:05:37.504403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.586 [2024-04-26 14:05:38.072551] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.586 [2024-04-26 14:05:38.104435] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:58.586 [2024-04-26 14:05:38.104694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.586 14:05:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:58.586 14:05:38 -- common/autotest_common.sh@850 -- # return 0 00:22:58.586 14:05:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:58.586 14:05:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:58.586 14:05:38 -- common/autotest_common.sh@10 -- # set +x 00:22:58.586 14:05:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.586 14:05:38 -- target/tls.sh@272 -- # bdevperf_pid=80858 00:22:58.586 14:05:38 -- target/tls.sh@273 -- # waitforlisten 80858 /var/tmp/bdevperf.sock 00:22:58.586 14:05:38 -- common/autotest_common.sh@817 -- # '[' -z 80858 ']' 00:22:58.586 14:05:38 -- target/tls.sh@270 -- # echo '{ 00:22:58.586 "subsystems": [ 00:22:58.586 { 00:22:58.586 "subsystem": "keyring", 00:22:58.586 "config": [ 00:22:58.586 { 00:22:58.586 "method": "keyring_file_add_key", 00:22:58.586 "params": { 00:22:58.586 "name": "key0", 00:22:58.586 "path": "/tmp/tmp.ITmYqK4ymx" 00:22:58.586 } 00:22:58.586 } 00:22:58.586 ] 00:22:58.586 }, 00:22:58.586 { 00:22:58.586 "subsystem": "iobuf", 00:22:58.586 "config": [ 00:22:58.586 { 00:22:58.586 "method": "iobuf_set_options", 00:22:58.586 "params": { 00:22:58.586 "large_bufsize": 135168, 00:22:58.586 "large_pool_count": 1024, 00:22:58.586 "small_bufsize": 
8192, 00:22:58.586 "small_pool_count": 8192 00:22:58.586 } 00:22:58.586 } 00:22:58.586 ] 00:22:58.586 }, 00:22:58.586 { 00:22:58.586 "subsystem": "sock", 00:22:58.586 "config": [ 00:22:58.586 { 00:22:58.586 "method": "sock_impl_set_options", 00:22:58.586 "params": { 00:22:58.586 "enable_ktls": false, 00:22:58.586 "enable_placement_id": 0, 00:22:58.586 "enable_quickack": false, 00:22:58.586 "enable_recv_pipe": true, 00:22:58.586 "enable_zerocopy_send_client": false, 00:22:58.586 "enable_zerocopy_send_server": true, 00:22:58.586 "impl_name": "posix", 00:22:58.586 "recv_buf_size": 2097152, 00:22:58.586 "send_buf_size": 2097152, 00:22:58.586 "tls_version": 0, 00:22:58.586 "zerocopy_threshold": 0 00:22:58.586 } 00:22:58.586 }, 00:22:58.586 { 00:22:58.586 "method": "sock_impl_set_options", 00:22:58.586 "params": { 00:22:58.586 "enable_ktls": false, 00:22:58.586 "enable_placement_id": 0, 00:22:58.586 "enable_quickack": false, 00:22:58.586 "enable_recv_pipe": true, 00:22:58.586 "enable_zerocopy_send_client": false, 00:22:58.586 "enable_zerocopy_send_server": true, 00:22:58.586 "impl_name": "ssl", 00:22:58.586 "recv_buf_size": 4096, 00:22:58.586 "send_buf_size": 4096, 00:22:58.587 "tls_version": 0, 00:22:58.587 "zerocopy_threshold": 0 00:22:58.587 } 00:22:58.587 } 00:22:58.587 ] 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "subsystem": "vmd", 00:22:58.587 "config": [] 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "subsystem": "accel", 00:22:58.587 "config": [ 00:22:58.587 { 00:22:58.587 "method": "accel_set_options", 00:22:58.587 "params": { 00:22:58.587 "buf_count": 2048, 00:22:58.587 "large_cache_size": 16, 00:22:58.587 "sequence_count": 2048, 00:22:58.587 "small_cache_size": 128, 00:22:58.587 "task_count": 2048 00:22:58.587 } 00:22:58.587 } 00:22:58.587 ] 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "subsystem": "bdev", 00:22:58.587 "config": [ 00:22:58.587 { 00:22:58.587 "method": "bdev_set_options", 00:22:58.587 "params": { 00:22:58.587 "bdev_auto_examine": true, 00:22:58.587 "bdev_io_cache_size": 256, 00:22:58.587 "bdev_io_pool_size": 65535, 00:22:58.587 "iobuf_large_cache_size": 16, 00:22:58.587 "iobuf_small_cache_size": 128 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_raid_set_options", 00:22:58.587 "params": { 00:22:58.587 "process_window_size_kb": 1024 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_iscsi_set_options", 00:22:58.587 "params": { 00:22:58.587 "timeout_sec": 30 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_nvme_set_options", 00:22:58.587 "params": { 00:22:58.587 "action_on_timeout": "none", 00:22:58.587 "allow_accel_sequence": false, 00:22:58.587 "arbitration_burst": 0, 00:22:58.587 "bdev_retry_count": 3, 00:22:58.587 "ctrlr_loss_timeout_sec": 0, 00:22:58.587 "delay_cmd_submit": true, 00:22:58.587 "dhchap_dhgroups": [ 00:22:58.587 "null", 00:22:58.587 "ffdhe2048", 00:22:58.587 "ffdhe3072", 00:22:58.587 "ffdhe4096", 00:22:58.587 "ffdhe6144", 00:22:58.587 "ffdhe8192" 00:22:58.587 ], 00:22:58.587 "dhchap_digests": [ 00:22:58.587 "sha256", 00:22:58.587 "sha384", 00:22:58.587 "sha512" 00:22:58.587 ], 00:22:58.587 "disable_auto_failback": false, 00:22:58.587 "fast_io_fail_timeout_sec": 0, 00:22:58.587 "generate_uuids": false, 00:22:58.587 "high_priority_weight": 0, 00:22:58.587 "io_path_stat": false, 00:22:58.587 "io_queue_requests": 512, 00:22:58.587 "keep_alive_timeout_ms": 10000, 00:22:58.587 "low_priority_weight": 0, 00:22:58.587 "medium_priority_weight": 0, 00:22:58.587 
"nvme_adminq_poll_period_us": 10000, 00:22:58.587 "nvme_error_stat": false, 00:22:58.587 "nvme_ioq_poll_period_us": 0, 00:22:58.587 "rdma_cm_event_timeout_ms": 0, 00:22:58.587 "rdma_max_cq_size": 0, 00:22:58.587 "rdma_srq_size": 0, 00:22:58.587 "reconnect_delay_sec": 0, 00:22:58.587 "timeout_admin_us": 0, 00:22:58.587 "timeout_us": 0, 00:22:58.587 "transport_ack_timeout": 0, 00:22:58.587 "transport_retry_count": 4, 00:22:58.587 "transport_tos": 0 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_nvme_attach_controller", 00:22:58.587 "params": { 00:22:58.587 "adrfam": "IPv4", 00:22:58.587 "ctrlr_loss_timeout_sec": 0, 00:22:58.587 "ddgst": false, 00:22:58.587 "fast_io_fail_timeout_sec": 0, 00:22:58.587 "hdgst": false, 00:22:58.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.587 "name": "nvme0", 00:22:58.587 "prchk_guard": false, 00:22:58.587 "prchk_reftag": false, 00:22:58.587 "psk": "key0", 00:22:58.587 "reconnect_delay_sec": 0, 00:22:58.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.587 "traddr": "10.0.0.2", 00:22:58.587 "trsvcid": "4420", 00:22:58.587 "trtype": "TCP" 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_nvme_set_hotplug", 00:22:58.587 "params": { 00:22:58.587 "enable": false, 00:22:58.587 "period_us": 100000 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_enable_histogram", 00:22:58.587 "params": { 00:22:58.587 "enable": true, 00:22:58.587 "name": "nvme0n1" 00:22:58.587 } 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "method": "bdev_wait_for_examine" 00:22:58.587 } 00:22:58.587 ] 00:22:58.587 }, 00:22:58.587 { 00:22:58.587 "subsystem": "nbd", 00:22:58.587 "config": [] 00:22:58.587 } 00:22:58.587 ] 00:22:58.587 }' 00:22:58.587 14:05:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.587 14:05:38 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:58.587 14:05:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.587 14:05:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.587 14:05:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.587 14:05:38 -- common/autotest_common.sh@10 -- # set +x 00:22:58.846 [2024-04-26 14:05:38.306852] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:22:58.846 [2024-04-26 14:05:38.307019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80858 ] 00:22:58.846 [2024-04-26 14:05:38.477787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.104 [2024-04-26 14:05:38.729070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.672 [2024-04-26 14:05:39.187887] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.672 14:05:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.672 14:05:39 -- common/autotest_common.sh@850 -- # return 0 00:22:59.672 14:05:39 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:59.672 14:05:39 -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:59.931 14:05:39 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.931 14:05:39 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.931 Running I/O for 1 seconds... 00:23:01.310 00:23:01.310 Latency(us) 00:23:01.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.310 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:01.310 Verification LBA range: start 0x0 length 0x2000 00:23:01.310 nvme0n1 : 1.02 4351.76 17.00 0.00 0.00 29106.56 6185.12 24003.55 00:23:01.310 =================================================================================================================== 00:23:01.310 Total : 4351.76 17.00 0.00 0.00 29106.56 6185.12 24003.55 00:23:01.310 0 00:23:01.310 14:05:40 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:01.310 14:05:40 -- target/tls.sh@279 -- # cleanup 00:23:01.310 14:05:40 -- target/tls.sh@15 -- # process_shm --id 0 00:23:01.310 14:05:40 -- common/autotest_common.sh@794 -- # type=--id 00:23:01.310 14:05:40 -- common/autotest_common.sh@795 -- # id=0 00:23:01.310 14:05:40 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:01.310 14:05:40 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:01.310 14:05:40 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:01.310 14:05:40 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:01.310 14:05:40 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:01.310 14:05:40 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:01.310 nvmf_trace.0 00:23:01.310 14:05:40 -- common/autotest_common.sh@809 -- # return 0 00:23:01.310 14:05:40 -- target/tls.sh@16 -- # killprocess 80858 00:23:01.310 14:05:40 -- common/autotest_common.sh@936 -- # '[' -z 80858 ']' 00:23:01.310 14:05:40 -- common/autotest_common.sh@940 -- # kill -0 80858 00:23:01.310 14:05:40 -- common/autotest_common.sh@941 -- # uname 00:23:01.310 14:05:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:01.310 14:05:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80858 00:23:01.310 14:05:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:01.310 14:05:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:01.310 killing process with pid 80858 00:23:01.310 14:05:40 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 80858' 00:23:01.310 14:05:40 -- common/autotest_common.sh@955 -- # kill 80858 00:23:01.310 Received shutdown signal, test time was about 1.000000 seconds 00:23:01.310 00:23:01.311 Latency(us) 00:23:01.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.311 =================================================================================================================== 00:23:01.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.311 14:05:40 -- common/autotest_common.sh@960 -- # wait 80858 00:23:02.690 14:05:42 -- target/tls.sh@17 -- # nvmftestfini 00:23:02.690 14:05:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:02.690 14:05:42 -- nvmf/common.sh@117 -- # sync 00:23:02.690 14:05:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.690 14:05:42 -- nvmf/common.sh@120 -- # set +e 00:23:02.690 14:05:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.690 14:05:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.690 rmmod nvme_tcp 00:23:02.690 rmmod nvme_fabrics 00:23:02.690 rmmod nvme_keyring 00:23:02.690 14:05:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.690 14:05:42 -- nvmf/common.sh@124 -- # set -e 00:23:02.690 14:05:42 -- nvmf/common.sh@125 -- # return 0 00:23:02.690 14:05:42 -- nvmf/common.sh@478 -- # '[' -n 80814 ']' 00:23:02.690 14:05:42 -- nvmf/common.sh@479 -- # killprocess 80814 00:23:02.690 14:05:42 -- common/autotest_common.sh@936 -- # '[' -z 80814 ']' 00:23:02.690 14:05:42 -- common/autotest_common.sh@940 -- # kill -0 80814 00:23:02.690 14:05:42 -- common/autotest_common.sh@941 -- # uname 00:23:02.690 14:05:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:02.690 14:05:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80814 00:23:02.690 14:05:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:02.690 14:05:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:02.690 14:05:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80814' 00:23:02.690 killing process with pid 80814 00:23:02.690 14:05:42 -- common/autotest_common.sh@955 -- # kill 80814 00:23:02.690 14:05:42 -- common/autotest_common.sh@960 -- # wait 80814 00:23:04.065 14:05:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:04.065 14:05:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:04.065 14:05:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:04.065 14:05:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.065 14:05:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.065 14:05:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.065 14:05:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.065 14:05:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.065 14:05:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:04.065 14:05:43 -- target/tls.sh@18 -- # rm -f /tmp/tmp.cu7t8ZzYHU /tmp/tmp.NMKpduGSAp /tmp/tmp.ITmYqK4ymx 00:23:04.065 00:23:04.065 real 1m48.207s 00:23:04.065 user 2m45.810s 00:23:04.065 sys 0m30.781s 00:23:04.065 ************************************ 00:23:04.065 END TEST nvmf_tls 00:23:04.065 ************************************ 00:23:04.065 14:05:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:04.065 14:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:04.323 14:05:43 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:04.324 14:05:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:04.324 14:05:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:04.324 14:05:43 -- common/autotest_common.sh@10 -- # set +x 00:23:04.324 ************************************ 00:23:04.324 START TEST nvmf_fips 00:23:04.324 ************************************ 00:23:04.324 14:05:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:04.584 * Looking for test storage... 00:23:04.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:23:04.584 14:05:44 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:04.584 14:05:44 -- nvmf/common.sh@7 -- # uname -s 00:23:04.584 14:05:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.584 14:05:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.584 14:05:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.584 14:05:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.584 14:05:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.584 14:05:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.584 14:05:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.584 14:05:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.584 14:05:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.584 14:05:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.584 14:05:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:04.584 14:05:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:04.584 14:05:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.584 14:05:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.584 14:05:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:04.584 14:05:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.584 14:05:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:04.584 14:05:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.584 14:05:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.584 14:05:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.584 14:05:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.584 14:05:44 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.584 14:05:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.584 14:05:44 -- paths/export.sh@5 -- # export PATH 00:23:04.584 14:05:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.584 14:05:44 -- nvmf/common.sh@47 -- # : 0 00:23:04.584 14:05:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.584 14:05:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.584 14:05:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.584 14:05:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.584 14:05:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.584 14:05:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.584 14:05:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.584 14:05:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.584 14:05:44 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:04.584 14:05:44 -- fips/fips.sh@89 -- # check_openssl_version 00:23:04.584 14:05:44 -- fips/fips.sh@83 -- # local target=3.0.0 00:23:04.584 14:05:44 -- fips/fips.sh@85 -- # openssl version 00:23:04.584 14:05:44 -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:04.584 14:05:44 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:04.584 14:05:44 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:04.584 14:05:44 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:04.584 14:05:44 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:04.584 14:05:44 -- scripts/common.sh@333 -- # IFS=.-: 00:23:04.584 14:05:44 -- scripts/common.sh@333 -- # read -ra ver1 00:23:04.584 14:05:44 -- scripts/common.sh@334 -- # IFS=.-: 00:23:04.584 14:05:44 -- scripts/common.sh@334 -- # read -ra ver2 00:23:04.584 14:05:44 -- scripts/common.sh@335 -- # local 'op=>=' 00:23:04.584 14:05:44 -- scripts/common.sh@337 -- # ver1_l=3 00:23:04.584 14:05:44 -- scripts/common.sh@338 -- # ver2_l=3 00:23:04.584 14:05:44 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:04.584 14:05:44 -- 
scripts/common.sh@341 -- # case "$op" in 00:23:04.584 14:05:44 -- scripts/common.sh@345 -- # : 1 00:23:04.584 14:05:44 -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:04.584 14:05:44 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.584 14:05:44 -- scripts/common.sh@362 -- # decimal 3 00:23:04.584 14:05:44 -- scripts/common.sh@350 -- # local d=3 00:23:04.584 14:05:44 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:04.584 14:05:44 -- scripts/common.sh@352 -- # echo 3 00:23:04.584 14:05:44 -- scripts/common.sh@362 -- # ver1[v]=3 00:23:04.584 14:05:44 -- scripts/common.sh@363 -- # decimal 3 00:23:04.584 14:05:44 -- scripts/common.sh@350 -- # local d=3 00:23:04.584 14:05:44 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:04.584 14:05:44 -- scripts/common.sh@352 -- # echo 3 00:23:04.584 14:05:44 -- scripts/common.sh@363 -- # ver2[v]=3 00:23:04.584 14:05:44 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.584 14:05:44 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:04.584 14:05:44 -- scripts/common.sh@361 -- # (( v++ )) 00:23:04.584 14:05:44 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.584 14:05:44 -- scripts/common.sh@362 -- # decimal 0 00:23:04.584 14:05:44 -- scripts/common.sh@350 -- # local d=0 00:23:04.584 14:05:44 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.584 14:05:44 -- scripts/common.sh@352 -- # echo 0 00:23:04.584 14:05:44 -- scripts/common.sh@362 -- # ver1[v]=0 00:23:04.584 14:05:44 -- scripts/common.sh@363 -- # decimal 0 00:23:04.584 14:05:44 -- scripts/common.sh@350 -- # local d=0 00:23:04.584 14:05:44 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.584 14:05:44 -- scripts/common.sh@352 -- # echo 0 00:23:04.584 14:05:44 -- scripts/common.sh@363 -- # ver2[v]=0 00:23:04.584 14:05:44 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.584 14:05:44 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:04.584 14:05:44 -- scripts/common.sh@361 -- # (( v++ )) 00:23:04.584 14:05:44 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.584 14:05:44 -- scripts/common.sh@362 -- # decimal 9 00:23:04.584 14:05:44 -- scripts/common.sh@350 -- # local d=9 00:23:04.584 14:05:44 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:04.584 14:05:44 -- scripts/common.sh@352 -- # echo 9 00:23:04.584 14:05:44 -- scripts/common.sh@362 -- # ver1[v]=9 00:23:04.584 14:05:44 -- scripts/common.sh@363 -- # decimal 0 00:23:04.584 14:05:44 -- scripts/common.sh@350 -- # local d=0 00:23:04.584 14:05:44 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.584 14:05:44 -- scripts/common.sh@352 -- # echo 0 00:23:04.584 14:05:44 -- scripts/common.sh@363 -- # ver2[v]=0 00:23:04.584 14:05:44 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.584 14:05:44 -- scripts/common.sh@364 -- # return 0 00:23:04.584 14:05:44 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:04.584 14:05:44 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:04.584 14:05:44 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:04.584 14:05:44 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:04.585 14:05:44 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:04.585 14:05:44 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:04.585 14:05:44 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:04.585 14:05:44 -- fips/fips.sh@113 -- # build_openssl_config 00:23:04.585 14:05:44 -- fips/fips.sh@37 -- # cat 00:23:04.585 14:05:44 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:04.585 14:05:44 -- fips/fips.sh@58 -- # cat - 00:23:04.585 14:05:44 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:04.585 14:05:44 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:04.585 14:05:44 -- fips/fips.sh@116 -- # mapfile -t providers 00:23:04.585 14:05:44 -- fips/fips.sh@116 -- # openssl list -providers 00:23:04.585 14:05:44 -- fips/fips.sh@116 -- # grep name 00:23:04.585 14:05:44 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:04.585 14:05:44 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:04.585 14:05:44 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:04.585 14:05:44 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:04.585 14:05:44 -- common/autotest_common.sh@638 -- # local es=0 00:23:04.585 14:05:44 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:04.585 14:05:44 -- common/autotest_common.sh@626 -- # local arg=openssl 00:23:04.585 14:05:44 -- fips/fips.sh@127 -- # : 00:23:04.585 14:05:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:04.585 14:05:44 -- common/autotest_common.sh@630 -- # type -t openssl 00:23:04.585 14:05:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:04.585 14:05:44 -- common/autotest_common.sh@632 -- # type -P openssl 00:23:04.585 14:05:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:04.585 14:05:44 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:23:04.585 14:05:44 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:23:04.585 14:05:44 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:23:04.844 Error setting digest 00:23:04.844 0072708D647F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:04.844 0072708D647F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:04.844 14:05:44 -- common/autotest_common.sh@641 -- # es=1 00:23:04.844 14:05:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:04.844 14:05:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:04.844 14:05:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:04.844 14:05:44 -- fips/fips.sh@130 -- # nvmftestinit 00:23:04.844 14:05:44 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:04.844 14:05:44 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.844 14:05:44 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:23:04.844 14:05:44 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:04.844 14:05:44 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:04.844 14:05:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.844 14:05:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:04.844 14:05:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.844 14:05:44 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:04.844 14:05:44 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:04.844 14:05:44 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:04.844 14:05:44 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:04.844 14:05:44 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:04.844 14:05:44 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:04.844 14:05:44 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.844 14:05:44 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.844 14:05:44 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:04.844 14:05:44 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:04.844 14:05:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:04.844 14:05:44 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:04.844 14:05:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:04.844 14:05:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.844 14:05:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:04.844 14:05:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:04.844 14:05:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:04.844 14:05:44 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:04.844 14:05:44 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:04.844 14:05:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:04.844 Cannot find device "nvmf_tgt_br" 00:23:04.844 14:05:44 -- nvmf/common.sh@155 -- # true 00:23:04.844 14:05:44 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:04.844 Cannot find device "nvmf_tgt_br2" 00:23:04.844 14:05:44 -- nvmf/common.sh@156 -- # true 00:23:04.844 14:05:44 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:04.844 14:05:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:04.844 Cannot find device "nvmf_tgt_br" 00:23:04.844 14:05:44 -- nvmf/common.sh@158 -- # true 00:23:04.844 14:05:44 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:04.844 Cannot find device "nvmf_tgt_br2" 00:23:04.844 14:05:44 -- nvmf/common.sh@159 -- # true 00:23:04.844 14:05:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:04.844 14:05:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:04.844 14:05:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:04.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.844 14:05:44 -- nvmf/common.sh@162 -- # true 00:23:04.844 14:05:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:04.844 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:04.844 14:05:44 -- nvmf/common.sh@163 -- # true 00:23:04.844 14:05:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:04.844 14:05:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:04.844 14:05:44 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:05.103 14:05:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:05.103 14:05:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:05.103 14:05:44 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:05.103 14:05:44 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:05.103 14:05:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:05.103 14:05:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:05.103 14:05:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:05.103 14:05:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:05.103 14:05:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:05.103 14:05:44 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:05.103 14:05:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:05.103 14:05:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:05.103 14:05:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:05.103 14:05:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:05.103 14:05:44 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:05.103 14:05:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:05.103 14:05:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:05.103 14:05:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:05.103 14:05:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:05.103 14:05:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:05.103 14:05:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:05.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:23:05.103 00:23:05.103 --- 10.0.0.2 ping statistics --- 00:23:05.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.103 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:23:05.103 14:05:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:05.103 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:05.103 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:05.103 00:23:05.103 --- 10.0.0.3 ping statistics --- 00:23:05.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.103 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:05.103 14:05:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:05.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:23:05.103 00:23:05.103 --- 10.0.0.1 ping statistics --- 00:23:05.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.103 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:05.103 14:05:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.103 14:05:44 -- nvmf/common.sh@422 -- # return 0 00:23:05.103 14:05:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:05.103 14:05:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.103 14:05:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:05.103 14:05:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:05.103 14:05:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.103 14:05:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:05.103 14:05:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:05.103 14:05:44 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:05.103 14:05:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:05.103 14:05:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:05.103 14:05:44 -- common/autotest_common.sh@10 -- # set +x 00:23:05.103 14:05:44 -- nvmf/common.sh@470 -- # nvmfpid=81182 00:23:05.103 14:05:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:05.103 14:05:44 -- nvmf/common.sh@471 -- # waitforlisten 81182 00:23:05.103 14:05:44 -- common/autotest_common.sh@817 -- # '[' -z 81182 ']' 00:23:05.103 14:05:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.103 14:05:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:05.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.103 14:05:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.104 14:05:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:05.104 14:05:44 -- common/autotest_common.sh@10 -- # set +x 00:23:05.362 [2024-04-26 14:05:44.874751] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:05.362 [2024-04-26 14:05:44.874877] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.620 [2024-04-26 14:05:45.042627] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.879 [2024-04-26 14:05:45.314731] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.879 [2024-04-26 14:05:45.314788] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.879 [2024-04-26 14:05:45.314805] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.879 [2024-04-26 14:05:45.314817] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.879 [2024-04-26 14:05:45.314830] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
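(Reference note: the nvmf_veth_init sequence traced above boils down to roughly the commands below. This is a minimal sketch that reuses the interface, namespace, and address names shown in the trace; exact options may differ between SPDK revisions.)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator-side reachability check before starting nvmf_tgt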
00:23:05.879 [2024-04-26 14:05:45.314864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.138 14:05:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.138 14:05:45 -- common/autotest_common.sh@850 -- # return 0 00:23:06.138 14:05:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.138 14:05:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.138 14:05:45 -- common/autotest_common.sh@10 -- # set +x 00:23:06.138 14:05:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.138 14:05:45 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:06.138 14:05:45 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:06.138 14:05:45 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:06.138 14:05:45 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:06.138 14:05:45 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:06.138 14:05:45 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:06.138 14:05:45 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:06.138 14:05:45 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:06.397 [2024-04-26 14:05:45.983021] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.397 [2024-04-26 14:05:45.998918] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:06.397 [2024-04-26 14:05:45.999199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.397 [2024-04-26 14:05:46.063717] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:06.397 malloc0 00:23:06.656 14:05:46 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.656 14:05:46 -- fips/fips.sh@147 -- # bdevperf_pid=81234 00:23:06.656 14:05:46 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.656 14:05:46 -- fips/fips.sh@148 -- # waitforlisten 81234 /var/tmp/bdevperf.sock 00:23:06.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.656 14:05:46 -- common/autotest_common.sh@817 -- # '[' -z 81234 ']' 00:23:06.656 14:05:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.656 14:05:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.656 14:05:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.656 14:05:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.656 14:05:46 -- common/autotest_common.sh@10 -- # set +x 00:23:06.656 [2024-04-26 14:05:46.218553] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:06.656 [2024-04-26 14:05:46.218675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81234 ] 00:23:06.915 [2024-04-26 14:05:46.394732] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.174 [2024-04-26 14:05:46.629759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.434 14:05:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.434 14:05:47 -- common/autotest_common.sh@850 -- # return 0 00:23:07.434 14:05:47 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:07.693 [2024-04-26 14:05:47.235330] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.693 [2024-04-26 14:05:47.235486] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:07.693 TLSTESTn1 00:23:07.693 14:05:47 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:07.953 Running I/O for 10 seconds... 00:23:17.930 00:23:17.930 Latency(us) 00:23:17.930 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:17.930 Verification LBA range: start 0x0 length 0x2000 00:23:17.930 TLSTESTn1 : 10.03 4299.20 16.79 0.00 0.00 29714.93 7369.51 21687.42 00:23:17.930 =================================================================================================================== 00:23:17.930 Total : 4299.20 16.79 0.00 0.00 29714.93 7369.51 21687.42 00:23:17.930 0 00:23:17.930 14:05:57 -- fips/fips.sh@1 -- # cleanup 00:23:17.930 14:05:57 -- fips/fips.sh@15 -- # process_shm --id 0 00:23:17.930 14:05:57 -- common/autotest_common.sh@794 -- # type=--id 00:23:17.930 14:05:57 -- common/autotest_common.sh@795 -- # id=0 00:23:17.930 14:05:57 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:23:17.930 14:05:57 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:17.930 14:05:57 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:23:17.930 14:05:57 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:23:17.930 14:05:57 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:23:17.930 14:05:57 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:17.930 nvmf_trace.0 00:23:17.930 14:05:57 -- common/autotest_common.sh@809 -- # return 0 00:23:17.930 14:05:57 -- fips/fips.sh@16 -- # killprocess 81234 00:23:17.930 14:05:57 -- common/autotest_common.sh@936 -- # '[' -z 81234 ']' 00:23:17.930 14:05:57 -- common/autotest_common.sh@940 -- # kill -0 81234 00:23:17.930 14:05:57 -- common/autotest_common.sh@941 -- # uname 00:23:17.930 14:05:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.930 14:05:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81234 00:23:18.189 14:05:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:23:18.189 
14:05:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:23:18.189 14:05:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81234' 00:23:18.189 killing process with pid 81234 00:23:18.189 14:05:57 -- common/autotest_common.sh@955 -- # kill 81234 00:23:18.189 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.189 00:23:18.189 Latency(us) 00:23:18.189 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.189 =================================================================================================================== 00:23:18.189 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.189 [2024-04-26 14:05:57.624865] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.189 14:05:57 -- common/autotest_common.sh@960 -- # wait 81234 00:23:19.566 14:05:58 -- fips/fips.sh@17 -- # nvmftestfini 00:23:19.566 14:05:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:19.566 14:05:58 -- nvmf/common.sh@117 -- # sync 00:23:19.566 14:05:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.566 14:05:59 -- nvmf/common.sh@120 -- # set +e 00:23:19.566 14:05:59 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.566 14:05:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.566 rmmod nvme_tcp 00:23:19.566 rmmod nvme_fabrics 00:23:19.566 rmmod nvme_keyring 00:23:19.566 14:05:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.566 14:05:59 -- nvmf/common.sh@124 -- # set -e 00:23:19.566 14:05:59 -- nvmf/common.sh@125 -- # return 0 00:23:19.566 14:05:59 -- nvmf/common.sh@478 -- # '[' -n 81182 ']' 00:23:19.566 14:05:59 -- nvmf/common.sh@479 -- # killprocess 81182 00:23:19.566 14:05:59 -- common/autotest_common.sh@936 -- # '[' -z 81182 ']' 00:23:19.566 14:05:59 -- common/autotest_common.sh@940 -- # kill -0 81182 00:23:19.566 14:05:59 -- common/autotest_common.sh@941 -- # uname 00:23:19.566 14:05:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:19.566 14:05:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81182 00:23:19.566 14:05:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:19.566 14:05:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:19.566 14:05:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81182' 00:23:19.566 killing process with pid 81182 00:23:19.566 14:05:59 -- common/autotest_common.sh@955 -- # kill 81182 00:23:19.566 [2024-04-26 14:05:59.083985] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:19.566 14:05:59 -- common/autotest_common.sh@960 -- # wait 81182 00:23:20.947 14:06:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:20.947 14:06:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:20.947 14:06:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:20.947 14:06:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:20.947 14:06:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:20.947 14:06:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.947 14:06:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.947 14:06:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.947 14:06:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:20.947 14:06:00 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:23:20.947 00:23:20.947 real 0m16.648s 00:23:20.947 user 0m22.263s 00:23:20.947 sys 0m6.074s 00:23:20.947 14:06:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.947 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:20.947 ************************************ 00:23:20.947 END TEST nvmf_fips 00:23:20.947 ************************************ 00:23:20.947 14:06:00 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:23:20.947 14:06:00 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:23:20.947 14:06:00 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:23:20.947 14:06:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:20.947 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:21.216 14:06:00 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:23:21.216 14:06:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:21.216 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:21.216 14:06:00 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:23:21.216 14:06:00 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:21.216 14:06:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:21.216 14:06:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:21.216 14:06:00 -- common/autotest_common.sh@10 -- # set +x 00:23:21.216 ************************************ 00:23:21.216 START TEST nvmf_multicontroller 00:23:21.216 ************************************ 00:23:21.216 14:06:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:21.475 * Looking for test storage... 00:23:21.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:21.475 14:06:00 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:21.475 14:06:00 -- nvmf/common.sh@7 -- # uname -s 00:23:21.475 14:06:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.475 14:06:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.475 14:06:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.475 14:06:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.475 14:06:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.476 14:06:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.476 14:06:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.476 14:06:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.476 14:06:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.476 14:06:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.476 14:06:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:21.476 14:06:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:21.476 14:06:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.476 14:06:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.476 14:06:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:21.476 14:06:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.476 14:06:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:21.476 14:06:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.476 14:06:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.476 14:06:00 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.476 14:06:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.476 14:06:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.476 14:06:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.476 14:06:00 -- paths/export.sh@5 -- # export PATH 00:23:21.476 14:06:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.476 14:06:00 -- nvmf/common.sh@47 -- # : 0 00:23:21.476 14:06:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.476 14:06:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.476 14:06:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.476 14:06:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.476 14:06:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.476 14:06:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.476 14:06:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.476 14:06:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.476 14:06:00 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.476 14:06:00 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.476 14:06:00 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:21.476 14:06:00 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:21.476 14:06:00 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:21.476 14:06:00 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
00:23:21.476 14:06:00 -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:21.476 14:06:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:21.476 14:06:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.476 14:06:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:21.476 14:06:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:21.476 14:06:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:21.476 14:06:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.476 14:06:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.476 14:06:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.476 14:06:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:21.476 14:06:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:21.476 14:06:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:21.476 14:06:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:21.476 14:06:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:21.476 14:06:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:21.476 14:06:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:21.476 14:06:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:21.476 14:06:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:21.476 14:06:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:21.476 14:06:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:21.476 14:06:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:21.476 14:06:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:21.476 14:06:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:21.476 14:06:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:21.476 14:06:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:21.476 14:06:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:21.476 14:06:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:21.476 14:06:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:21.476 14:06:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:21.476 Cannot find device "nvmf_tgt_br" 00:23:21.476 14:06:00 -- nvmf/common.sh@155 -- # true 00:23:21.476 14:06:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:21.476 Cannot find device "nvmf_tgt_br2" 00:23:21.476 14:06:01 -- nvmf/common.sh@156 -- # true 00:23:21.476 14:06:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:21.476 14:06:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:21.476 Cannot find device "nvmf_tgt_br" 00:23:21.476 14:06:01 -- nvmf/common.sh@158 -- # true 00:23:21.476 14:06:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:21.476 Cannot find device "nvmf_tgt_br2" 00:23:21.476 14:06:01 -- nvmf/common.sh@159 -- # true 00:23:21.476 14:06:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:21.476 14:06:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:21.476 14:06:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:21.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:21.476 14:06:01 -- nvmf/common.sh@162 -- # true 00:23:21.476 14:06:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:21.476 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:23:21.476 14:06:01 -- nvmf/common.sh@163 -- # true 00:23:21.476 14:06:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:21.476 14:06:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:21.476 14:06:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:21.735 14:06:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:21.735 14:06:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:21.735 14:06:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:21.735 14:06:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:21.735 14:06:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:21.735 14:06:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:21.735 14:06:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:21.735 14:06:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:21.735 14:06:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:21.735 14:06:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:21.735 14:06:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:21.735 14:06:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:21.735 14:06:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:21.735 14:06:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:21.735 14:06:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:21.735 14:06:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:21.735 14:06:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:21.735 14:06:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:21.735 14:06:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:21.735 14:06:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:21.735 14:06:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:21.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:21.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:23:21.735 00:23:21.735 --- 10.0.0.2 ping statistics --- 00:23:21.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.735 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:23:21.735 14:06:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:21.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:21.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:23:21.735 00:23:21.735 --- 10.0.0.3 ping statistics --- 00:23:21.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.735 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:23:21.735 14:06:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:21.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:21.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:23:21.735 00:23:21.735 --- 10.0.0.1 ping statistics --- 00:23:21.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:21.735 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:21.735 14:06:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:21.735 14:06:01 -- nvmf/common.sh@422 -- # return 0 00:23:21.735 14:06:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:21.735 14:06:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:21.735 14:06:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:21.735 14:06:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:21.735 14:06:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:21.735 14:06:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:21.735 14:06:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:21.994 14:06:01 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:21.994 14:06:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:21.994 14:06:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:21.994 14:06:01 -- common/autotest_common.sh@10 -- # set +x 00:23:21.994 14:06:01 -- nvmf/common.sh@470 -- # nvmfpid=81630 00:23:21.994 14:06:01 -- nvmf/common.sh@471 -- # waitforlisten 81630 00:23:21.994 14:06:01 -- common/autotest_common.sh@817 -- # '[' -z 81630 ']' 00:23:21.994 14:06:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:21.994 14:06:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:21.994 14:06:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:21.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:21.994 14:06:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:21.994 14:06:01 -- common/autotest_common.sh@10 -- # set +x 00:23:21.994 14:06:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:21.994 [2024-04-26 14:06:01.535748] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:21.994 [2024-04-26 14:06:01.535899] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.252 [2024-04-26 14:06:01.713728] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:22.511 [2024-04-26 14:06:01.960197] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.511 [2024-04-26 14:06:01.960264] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.511 [2024-04-26 14:06:01.960283] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.511 [2024-04-26 14:06:01.960311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.511 [2024-04-26 14:06:01.960327] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
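The nvmf_veth_init trace above reduces to a short, hand-runnable sequence. Below is a minimal sketch reconstructed from those commands, trimmed to a single target interface (the test also adds nvmf_tgt_if2 / 10.0.0.3 the same way); it assumes iproute2 and iptables are available and reuses the interface names and addresses the test uses, which are conventions of this suite rather than SPDK requirements.

  # Initiator veth on the host, target veth inside a network namespace, one bridge between them.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP to the listener
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                                          # host -> target sanity check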
00:23:22.511 [2024-04-26 14:06:01.961362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.511 [2024-04-26 14:06:01.961551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.511 [2024-04-26 14:06:01.961600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.770 14:06:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:22.770 14:06:02 -- common/autotest_common.sh@850 -- # return 0 00:23:22.770 14:06:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:22.770 14:06:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:22.770 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 14:06:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.029 14:06:02 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 [2024-04-26 14:06:02.455729] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.029 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.029 14:06:02 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 Malloc0 00:23:23.029 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.029 14:06:02 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.029 14:06:02 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.029 14:06:02 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 [2024-04-26 14:06:02.595207] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:23.029 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.029 14:06:02 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.029 [2024-04-26 14:06:02.607103] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:23.029 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.029 14:06:02 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:23.029 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.029 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.288 Malloc1 00:23:23.288 14:06:02 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.288 14:06:02 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:23.288 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.288 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.288 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.289 14:06:02 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:23.289 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.289 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.289 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.289 14:06:02 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:23.289 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.289 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.289 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.289 14:06:02 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:23.289 14:06:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:23.289 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:23.289 14:06:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:23.289 14:06:02 -- host/multicontroller.sh@44 -- # bdevperf_pid=81682 00:23:23.289 14:06:02 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:23.289 14:06:02 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:23.289 14:06:02 -- host/multicontroller.sh@47 -- # waitforlisten 81682 /var/tmp/bdevperf.sock 00:23:23.289 14:06:02 -- common/autotest_common.sh@817 -- # '[' -z 81682 ']' 00:23:23.289 14:06:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:23.289 14:06:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:23.289 14:06:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:23.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
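The rpc_cmd calls above are thin wrappers around scripts/rpc.py against the target's default RPC socket. A condensed sketch of the same target-side setup, using the flag values from the trace and assuming the standard test VM layout under /home/vagrant/spdk_repo/spdk (only cnode1 is shown; cnode2 with Malloc1 is created the same way):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192                   # -o and -u 8192 mirror the flags the test passes
  $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421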
00:23:23.289 14:06:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:23.289 14:06:02 -- common/autotest_common.sh@10 -- # set +x 00:23:24.225 14:06:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:24.225 14:06:03 -- common/autotest_common.sh@850 -- # return 0 00:23:24.225 14:06:03 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.225 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.225 NVMe0n1 00:23:24.225 14:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.225 14:06:03 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.225 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.225 14:06:03 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:24.225 14:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.225 1 00:23:24.225 14:06:03 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.225 14:06:03 -- common/autotest_common.sh@638 -- # local es=0 00:23:24.225 14:06:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.225 14:06:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.225 14:06:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:24.225 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.225 2024/04/26 14:06:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:24.225 request: 00:23:24.225 { 00:23:24.225 "method": "bdev_nvme_attach_controller", 00:23:24.225 "params": { 00:23:24.225 "name": "NVMe0", 00:23:24.225 "trtype": "tcp", 00:23:24.225 "traddr": "10.0.0.2", 00:23:24.225 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:24.225 "hostaddr": "10.0.0.2", 00:23:24.225 "hostsvcid": "60000", 00:23:24.225 "adrfam": "ipv4", 00:23:24.225 "trsvcid": "4420", 00:23:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:23:24.225 } 00:23:24.225 } 00:23:24.225 Got JSON-RPC error response 00:23:24.225 GoRPCClient: error on JSON-RPC call 00:23:24.225 14:06:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:24.225 14:06:03 -- 
common/autotest_common.sh@641 -- # es=1 00:23:24.225 14:06:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:24.225 14:06:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:24.225 14:06:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:24.225 14:06:03 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.225 14:06:03 -- common/autotest_common.sh@638 -- # local es=0 00:23:24.225 14:06:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.225 14:06:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.225 14:06:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:24.225 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.225 2024/04/26 14:06:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:24.225 request: 00:23:24.225 { 00:23:24.225 "method": "bdev_nvme_attach_controller", 00:23:24.225 "params": { 00:23:24.225 "name": "NVMe0", 00:23:24.225 "trtype": "tcp", 00:23:24.225 "traddr": "10.0.0.2", 00:23:24.225 "hostaddr": "10.0.0.2", 00:23:24.225 "hostsvcid": "60000", 00:23:24.225 "adrfam": "ipv4", 00:23:24.225 "trsvcid": "4420", 00:23:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:23:24.225 } 00:23:24.225 } 00:23:24.225 Got JSON-RPC error response 00:23:24.225 GoRPCClient: error on JSON-RPC call 00:23:24.225 14:06:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:24.225 14:06:03 -- common/autotest_common.sh@641 -- # es=1 00:23:24.225 14:06:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:24.225 14:06:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:24.225 14:06:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:24.225 14:06:03 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@638 -- # local es=0 00:23:24.225 14:06:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.225 14:06:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:24.225 14:06:03 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.225 14:06:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.225 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.225 2024/04/26 14:06:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:23:24.225 request: 00:23:24.225 { 00:23:24.225 "method": "bdev_nvme_attach_controller", 00:23:24.225 "params": { 00:23:24.225 "name": "NVMe0", 00:23:24.225 "trtype": "tcp", 00:23:24.225 "traddr": "10.0.0.2", 00:23:24.225 "hostaddr": "10.0.0.2", 00:23:24.225 "hostsvcid": "60000", 00:23:24.225 "adrfam": "ipv4", 00:23:24.225 "trsvcid": "4420", 00:23:24.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.225 "multipath": "disable" 00:23:24.226 } 00:23:24.226 } 00:23:24.226 Got JSON-RPC error response 00:23:24.226 GoRPCClient: error on JSON-RPC call 00:23:24.226 14:06:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:24.226 14:06:03 -- common/autotest_common.sh@641 -- # es=1 00:23:24.226 14:06:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:24.226 14:06:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:24.226 14:06:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:24.226 14:06:03 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.226 14:06:03 -- common/autotest_common.sh@638 -- # local es=0 00:23:24.226 14:06:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.226 14:06:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:24.226 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.226 14:06:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:24.226 14:06:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:24.226 14:06:03 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:24.226 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.226 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.226 2024/04/26 14:06:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:23:24.226 request: 00:23:24.226 { 00:23:24.226 "method": "bdev_nvme_attach_controller", 00:23:24.226 "params": { 00:23:24.226 "name": "NVMe0", 
00:23:24.226 "trtype": "tcp", 00:23:24.226 "traddr": "10.0.0.2", 00:23:24.226 "hostaddr": "10.0.0.2", 00:23:24.226 "hostsvcid": "60000", 00:23:24.226 "adrfam": "ipv4", 00:23:24.226 "trsvcid": "4420", 00:23:24.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.226 "multipath": "failover" 00:23:24.226 } 00:23:24.226 } 00:23:24.226 Got JSON-RPC error response 00:23:24.226 GoRPCClient: error on JSON-RPC call 00:23:24.226 14:06:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:24.226 14:06:03 -- common/autotest_common.sh@641 -- # es=1 00:23:24.226 14:06:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:24.226 14:06:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:24.226 14:06:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:24.485 14:06:03 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.485 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.485 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.485 00:23:24.485 14:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.485 14:06:03 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:24.485 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.485 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.485 14:06:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.485 14:06:03 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:24.485 14:06:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.485 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:23:24.485 00:23:24.485 14:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.485 14:06:04 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:24.485 14:06:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:24.485 14:06:04 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:24.485 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:23:24.485 14:06:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:24.485 14:06:04 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:24.485 14:06:04 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.861 0 00:23:25.861 14:06:05 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:25.861 14:06:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.861 14:06:05 -- common/autotest_common.sh@10 -- # set +x 00:23:25.861 14:06:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.861 14:06:05 -- host/multicontroller.sh@100 -- # killprocess 81682 00:23:25.861 14:06:05 -- common/autotest_common.sh@936 -- # '[' -z 81682 ']' 00:23:25.861 14:06:05 -- common/autotest_common.sh@940 -- # kill -0 81682 00:23:25.861 14:06:05 -- common/autotest_common.sh@941 -- # uname 00:23:25.861 14:06:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:25.861 14:06:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81682 00:23:25.861 killing process with pid 81682 00:23:25.861 
14:06:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:25.861 14:06:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:25.861 14:06:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81682' 00:23:25.861 14:06:05 -- common/autotest_common.sh@955 -- # kill 81682 00:23:25.861 14:06:05 -- common/autotest_common.sh@960 -- # wait 81682 00:23:27.275 14:06:06 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.275 14:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.275 14:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:27.275 14:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.275 14:06:06 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:27.275 14:06:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.275 14:06:06 -- common/autotest_common.sh@10 -- # set +x 00:23:27.275 14:06:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.275 14:06:06 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:27.275 14:06:06 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:27.275 14:06:06 -- common/autotest_common.sh@1598 -- # read -r file 00:23:27.275 14:06:06 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:23:27.275 14:06:06 -- common/autotest_common.sh@1597 -- # sort -u 00:23:27.275 14:06:06 -- common/autotest_common.sh@1599 -- # cat 00:23:27.275 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:27.275 [2024-04-26 14:06:02.838108] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:27.275 [2024-04-26 14:06:02.838268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81682 ] 00:23:27.275 [2024-04-26 14:06:03.009294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.275 [2024-04-26 14:06:03.247168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.275 [2024-04-26 14:06:04.044364] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 2e4516ac-9ea7-4e5e-b299-5a3285a74681 already exists 00:23:27.275 [2024-04-26 14:06:04.044484] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:2e4516ac-9ea7-4e5e-b299-5a3285a74681 alias for bdev NVMe1n1 00:23:27.275 [2024-04-26 14:06:04.044528] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:27.275 Running I/O for 1 seconds... 
00:23:27.275 00:23:27.275 Latency(us) 00:23:27.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.275 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:27.275 NVMe0n1 : 1.01 20325.59 79.40 0.00 0.00 6280.79 3500.52 12528.17 00:23:27.275 =================================================================================================================== 00:23:27.275 Total : 20325.59 79.40 0.00 0.00 6280.79 3500.52 12528.17 00:23:27.275 Received shutdown signal, test time was about 1.000000 seconds 00:23:27.275 00:23:27.275 Latency(us) 00:23:27.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:27.275 =================================================================================================================== 00:23:27.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:27.275 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:23:27.275 14:06:06 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:27.275 14:06:06 -- common/autotest_common.sh@1598 -- # read -r file 00:23:27.275 14:06:06 -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:27.275 14:06:06 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:27.275 14:06:06 -- nvmf/common.sh@117 -- # sync 00:23:27.275 14:06:06 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.275 14:06:06 -- nvmf/common.sh@120 -- # set +e 00:23:27.275 14:06:06 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.275 14:06:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.275 rmmod nvme_tcp 00:23:27.275 rmmod nvme_fabrics 00:23:27.275 rmmod nvme_keyring 00:23:27.275 14:06:06 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.275 14:06:06 -- nvmf/common.sh@124 -- # set -e 00:23:27.275 14:06:06 -- nvmf/common.sh@125 -- # return 0 00:23:27.275 14:06:06 -- nvmf/common.sh@478 -- # '[' -n 81630 ']' 00:23:27.275 14:06:06 -- nvmf/common.sh@479 -- # killprocess 81630 00:23:27.275 14:06:06 -- common/autotest_common.sh@936 -- # '[' -z 81630 ']' 00:23:27.275 14:06:06 -- common/autotest_common.sh@940 -- # kill -0 81630 00:23:27.275 14:06:06 -- common/autotest_common.sh@941 -- # uname 00:23:27.275 14:06:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:27.275 14:06:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81630 00:23:27.275 14:06:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:27.275 14:06:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:27.275 14:06:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81630' 00:23:27.275 killing process with pid 81630 00:23:27.275 14:06:06 -- common/autotest_common.sh@955 -- # kill 81630 00:23:27.275 14:06:06 -- common/autotest_common.sh@960 -- # wait 81630 00:23:29.177 14:06:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:29.177 14:06:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:29.177 14:06:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:29.177 14:06:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:29.177 14:06:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:29.177 14:06:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.177 14:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.177 14:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.177 14:06:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:29.177 
************************************ 00:23:29.177 END TEST nvmf_multicontroller 00:23:29.177 ************************************ 00:23:29.177 00:23:29.177 real 0m7.806s 00:23:29.177 user 0m22.961s 00:23:29.177 sys 0m1.621s 00:23:29.177 14:06:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:29.177 14:06:08 -- common/autotest_common.sh@10 -- # set +x 00:23:29.177 14:06:08 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.177 14:06:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:29.177 14:06:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:29.177 14:06:08 -- common/autotest_common.sh@10 -- # set +x 00:23:29.177 ************************************ 00:23:29.177 START TEST nvmf_aer 00:23:29.177 ************************************ 00:23:29.177 14:06:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:29.435 * Looking for test storage... 00:23:29.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:29.435 14:06:08 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:29.435 14:06:08 -- nvmf/common.sh@7 -- # uname -s 00:23:29.435 14:06:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.435 14:06:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.435 14:06:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.435 14:06:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.435 14:06:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.435 14:06:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.435 14:06:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.435 14:06:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.435 14:06:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.435 14:06:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.435 14:06:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:29.435 14:06:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:29.435 14:06:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.435 14:06:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.435 14:06:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:29.435 14:06:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.435 14:06:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:29.435 14:06:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.435 14:06:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.435 14:06:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.435 14:06:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.435 14:06:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.435 14:06:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.435 14:06:08 -- paths/export.sh@5 -- # export PATH 00:23:29.435 14:06:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.435 14:06:08 -- nvmf/common.sh@47 -- # : 0 00:23:29.435 14:06:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:29.435 14:06:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:29.435 14:06:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.435 14:06:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.435 14:06:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.435 14:06:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:29.435 14:06:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:29.435 14:06:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:29.435 14:06:08 -- host/aer.sh@11 -- # nvmftestinit 00:23:29.435 14:06:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:29.435 14:06:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:29.435 14:06:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:29.435 14:06:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:29.435 14:06:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:29.435 14:06:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.435 14:06:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:29.435 14:06:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.435 14:06:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:29.435 14:06:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:29.435 14:06:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:29.435 14:06:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:29.435 14:06:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:29.435 14:06:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:29.435 14:06:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:29.435 14:06:08 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:29.435 14:06:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:29.435 14:06:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:29.435 14:06:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:29.435 14:06:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:29.435 14:06:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:29.436 14:06:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:29.436 14:06:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:29.436 14:06:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:29.436 14:06:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:29.436 14:06:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:29.436 14:06:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:29.436 14:06:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:29.436 Cannot find device "nvmf_tgt_br" 00:23:29.436 14:06:08 -- nvmf/common.sh@155 -- # true 00:23:29.436 14:06:08 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.436 Cannot find device "nvmf_tgt_br2" 00:23:29.436 14:06:08 -- nvmf/common.sh@156 -- # true 00:23:29.436 14:06:08 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:29.436 14:06:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:29.436 Cannot find device "nvmf_tgt_br" 00:23:29.436 14:06:08 -- nvmf/common.sh@158 -- # true 00:23:29.436 14:06:08 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:29.436 Cannot find device "nvmf_tgt_br2" 00:23:29.436 14:06:09 -- nvmf/common.sh@159 -- # true 00:23:29.436 14:06:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:29.436 14:06:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:29.436 14:06:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.436 14:06:09 -- nvmf/common.sh@162 -- # true 00:23:29.436 14:06:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.436 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:29.436 14:06:09 -- nvmf/common.sh@163 -- # true 00:23:29.436 14:06:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:29.436 14:06:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:29.436 14:06:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:29.436 14:06:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:29.436 14:06:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:29.693 14:06:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:29.693 14:06:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:29.693 14:06:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:29.693 14:06:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:29.693 14:06:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:29.693 14:06:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:29.694 14:06:09 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:29.694 14:06:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:29.694 14:06:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:29.694 14:06:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:29.694 14:06:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:29.694 14:06:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:29.694 14:06:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:29.694 14:06:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:29.694 14:06:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:29.694 14:06:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:29.694 14:06:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:29.694 14:06:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:29.694 14:06:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:29.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:29.694 00:23:29.694 --- 10.0.0.2 ping statistics --- 00:23:29.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.694 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:29.694 14:06:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:29.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:29.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:23:29.694 00:23:29.694 --- 10.0.0.3 ping statistics --- 00:23:29.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.694 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:23:29.694 14:06:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:29.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:29.694 00:23:29.694 --- 10.0.0.1 ping statistics --- 00:23:29.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.694 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:29.694 14:06:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.694 14:06:09 -- nvmf/common.sh@422 -- # return 0 00:23:29.694 14:06:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:29.694 14:06:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.694 14:06:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:29.694 14:06:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:29.694 14:06:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.694 14:06:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:29.694 14:06:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:29.694 14:06:09 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:29.694 14:06:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:29.694 14:06:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:29.694 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:29.694 14:06:09 -- nvmf/common.sh@470 -- # nvmfpid=81967 00:23:29.694 14:06:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:29.694 14:06:09 -- nvmf/common.sh@471 -- # waitforlisten 81967 00:23:29.694 14:06:09 -- common/autotest_common.sh@817 -- # '[' -z 81967 ']' 00:23:29.694 14:06:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.694 14:06:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:29.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.694 14:06:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.694 14:06:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:29.694 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:23:29.952 [2024-04-26 14:06:09.384772] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:29.952 [2024-04-26 14:06:09.384889] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.952 [2024-04-26 14:06:09.556581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:30.210 [2024-04-26 14:06:09.800115] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.210 [2024-04-26 14:06:09.800184] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.210 [2024-04-26 14:06:09.800202] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.210 [2024-04-26 14:06:09.800214] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.210 [2024-04-26 14:06:09.800226] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
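The AER exercise that follows has three moving parts: expose a single namespace on a subsystem capped at two, attach SPDK's in-tree aer example as the host, then hot-add a second namespace so the target emits a Namespace Attribute Changed notice. A condensed sketch of those steps, lifted from the commands in the trace (paths assume the test VM layout):

  spdk=/home/vagrant/spdk_repo/spdk
  rpc=$spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side: one namespace to start with, at most two allowed (-m 2).
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 --name Malloc0
  $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 2
  $rpc nvmf_subsystem_add_ns $nqn Malloc0
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420

  # Host side: the aer example connects, registers its AER callback and waits for 2 namespaces.
  $spdk/test/nvme/aer/aer \
      -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" \
      -n 2 -t /tmp/aer_touch_file &

  # Hot-add namespace 2; the host should receive an AEN for log page 4 (changed namespace list).
  $rpc bdev_malloc_create 64 4096 --name Malloc1
  $rpc nvmf_subsystem_add_ns $nqn Malloc1 -n 2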
00:23:30.210 [2024-04-26 14:06:09.800335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.210 [2024-04-26 14:06:09.800487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.210 [2024-04-26 14:06:09.801423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.210 [2024-04-26 14:06:09.801459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.778 14:06:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:30.778 14:06:10 -- common/autotest_common.sh@850 -- # return 0 00:23:30.778 14:06:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:30.778 14:06:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 14:06:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.778 14:06:10 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:30.778 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 [2024-04-26 14:06:10.242279] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.778 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.778 14:06:10 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:30.778 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 Malloc0 00:23:30.778 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.778 14:06:10 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:30.778 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.778 14:06:10 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.778 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.778 14:06:10 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:30.778 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 [2024-04-26 14:06:10.382063] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:30.778 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.778 14:06:10 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:30.778 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.778 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:30.778 [2024-04-26 14:06:10.389727] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:30.778 [ 00:23:30.778 { 00:23:30.778 "allow_any_host": true, 00:23:30.778 "hosts": [], 00:23:30.778 "listen_addresses": [], 00:23:30.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:30.778 "subtype": "Discovery" 00:23:30.778 }, 00:23:30.778 { 00:23:30.778 "allow_any_host": true, 00:23:30.778 "hosts": 
[], 00:23:30.778 "listen_addresses": [ 00:23:30.778 { 00:23:30.778 "adrfam": "IPv4", 00:23:30.778 "traddr": "10.0.0.2", 00:23:30.778 "transport": "TCP", 00:23:30.778 "trsvcid": "4420", 00:23:30.778 "trtype": "TCP" 00:23:30.778 } 00:23:30.778 ], 00:23:30.778 "max_cntlid": 65519, 00:23:30.778 "max_namespaces": 2, 00:23:30.778 "min_cntlid": 1, 00:23:30.778 "model_number": "SPDK bdev Controller", 00:23:30.778 "namespaces": [ 00:23:30.778 { 00:23:30.778 "bdev_name": "Malloc0", 00:23:30.778 "name": "Malloc0", 00:23:30.778 "nguid": "F073E8A3BBED4A87A3E10B957B3B1DB5", 00:23:30.778 "nsid": 1, 00:23:30.778 "uuid": "f073e8a3-bbed-4a87-a3e1-0b957b3b1db5" 00:23:30.778 } 00:23:30.778 ], 00:23:30.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.778 "serial_number": "SPDK00000000000001", 00:23:30.778 "subtype": "NVMe" 00:23:30.778 } 00:23:30.778 ] 00:23:30.778 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.778 14:06:10 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:30.778 14:06:10 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:30.778 14:06:10 -- host/aer.sh@33 -- # aerpid=82027 00:23:30.778 14:06:10 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:30.778 14:06:10 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:30.778 14:06:10 -- common/autotest_common.sh@1251 -- # local i=0 00:23:30.778 14:06:10 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:30.778 14:06:10 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:23:30.778 14:06:10 -- common/autotest_common.sh@1254 -- # i=1 00:23:30.778 14:06:10 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:23:31.036 14:06:10 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.036 14:06:10 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:23:31.036 14:06:10 -- common/autotest_common.sh@1254 -- # i=2 00:23:31.036 14:06:10 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:23:31.036 14:06:10 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.036 14:06:10 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:23:31.036 14:06:10 -- common/autotest_common.sh@1254 -- # i=3 00:23:31.036 14:06:10 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:23:31.295 14:06:10 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:31.295 14:06:10 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:31.295 14:06:10 -- common/autotest_common.sh@1262 -- # return 0 00:23:31.295 14:06:10 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:31.295 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.295 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:31.295 Malloc1 00:23:31.295 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.295 14:06:10 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:31.295 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.295 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:31.295 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.295 14:06:10 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:31.295 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.295 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:31.295 [ 00:23:31.295 { 00:23:31.295 "allow_any_host": true, 00:23:31.295 "hosts": [], 00:23:31.295 "listen_addresses": [], 00:23:31.295 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:31.295 "subtype": "Discovery" 00:23:31.295 }, 00:23:31.295 { 00:23:31.295 "allow_any_host": true, 00:23:31.295 "hosts": [], 00:23:31.295 "listen_addresses": [ 00:23:31.295 { 00:23:31.295 "adrfam": "IPv4", 00:23:31.295 "traddr": "10.0.0.2", 00:23:31.295 "transport": "TCP", 00:23:31.295 "trsvcid": "4420", 00:23:31.295 "trtype": "TCP" 00:23:31.295 } 00:23:31.295 ], 00:23:31.295 "max_cntlid": 65519, 00:23:31.295 "max_namespaces": 2, 00:23:31.295 "min_cntlid": 1, 00:23:31.295 "model_number": "SPDK bdev Controller", 00:23:31.295 "namespaces": [ 00:23:31.295 { 00:23:31.295 "bdev_name": "Malloc0", 00:23:31.295 "name": "Malloc0", 00:23:31.295 "nguid": "F073E8A3BBED4A87A3E10B957B3B1DB5", 00:23:31.295 "nsid": 1, 00:23:31.295 "uuid": "f073e8a3-bbed-4a87-a3e1-0b957b3b1db5" 00:23:31.295 }, 00:23:31.295 { 00:23:31.295 "bdev_name": "Malloc1", 00:23:31.295 "name": "Malloc1", 00:23:31.295 "nguid": "6475ACB4CBCC4DED94C96DC1D8A30C82", 00:23:31.295 "nsid": 2, 00:23:31.295 "uuid": "6475acb4-cbcc-4ded-94c9-6dc1d8a30c82" 00:23:31.295 } 00:23:31.295 ], 00:23:31.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:31.295 "serial_number": "SPDK00000000000001", 00:23:31.295 "subtype": "NVMe" 00:23:31.295 } 00:23:31.295 ] 00:23:31.295 14:06:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.295 14:06:10 -- host/aer.sh@43 -- # wait 82027 00:23:31.295 Asynchronous Event Request test 00:23:31.295 Attaching to 10.0.0.2 00:23:31.295 Attached to 10.0.0.2 00:23:31.295 Registering asynchronous event callbacks... 00:23:31.295 Starting namespace attribute notice tests for all controllers... 00:23:31.295 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:31.295 aer_cb - Changed Namespace 00:23:31.295 Cleaning up... 
00:23:31.295 14:06:10 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:31.295 14:06:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.295 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:23:31.554 14:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.554 14:06:11 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:31.554 14:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.554 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:23:31.814 14:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.814 14:06:11 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:31.814 14:06:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.814 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:23:31.814 14:06:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.814 14:06:11 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:31.814 14:06:11 -- host/aer.sh@51 -- # nvmftestfini 00:23:31.814 14:06:11 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:31.814 14:06:11 -- nvmf/common.sh@117 -- # sync 00:23:31.814 14:06:11 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:31.814 14:06:11 -- nvmf/common.sh@120 -- # set +e 00:23:31.814 14:06:11 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:31.814 14:06:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:31.814 rmmod nvme_tcp 00:23:31.814 rmmod nvme_fabrics 00:23:31.814 rmmod nvme_keyring 00:23:31.814 14:06:11 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:31.814 14:06:11 -- nvmf/common.sh@124 -- # set -e 00:23:31.814 14:06:11 -- nvmf/common.sh@125 -- # return 0 00:23:31.814 14:06:11 -- nvmf/common.sh@478 -- # '[' -n 81967 ']' 00:23:31.814 14:06:11 -- nvmf/common.sh@479 -- # killprocess 81967 00:23:31.814 14:06:11 -- common/autotest_common.sh@936 -- # '[' -z 81967 ']' 00:23:31.814 14:06:11 -- common/autotest_common.sh@940 -- # kill -0 81967 00:23:31.814 14:06:11 -- common/autotest_common.sh@941 -- # uname 00:23:31.814 14:06:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:31.814 14:06:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81967 00:23:32.072 14:06:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:32.072 killing process with pid 81967 00:23:32.072 14:06:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:32.072 14:06:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81967' 00:23:32.072 14:06:11 -- common/autotest_common.sh@955 -- # kill 81967 00:23:32.072 [2024-04-26 14:06:11.509529] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:32.072 14:06:11 -- common/autotest_common.sh@960 -- # wait 81967 00:23:33.499 14:06:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:33.499 14:06:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:33.499 14:06:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:33.499 14:06:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.499 14:06:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.499 14:06:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.499 14:06:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.499 14:06:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:33.499 14:06:12 -- nvmf/common.sh@279 
-- # ip -4 addr flush nvmf_init_if 00:23:33.499 00:23:33.499 real 0m4.085s 00:23:33.499 user 0m10.881s 00:23:33.499 sys 0m0.955s 00:23:33.499 14:06:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:33.499 ************************************ 00:23:33.499 END TEST nvmf_aer 00:23:33.499 ************************************ 00:23:33.499 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:23:33.499 14:06:12 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.499 14:06:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:33.499 14:06:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:33.499 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:23:33.499 ************************************ 00:23:33.499 START TEST nvmf_async_init 00:23:33.499 ************************************ 00:23:33.499 14:06:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:33.499 * Looking for test storage... 00:23:33.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:33.499 14:06:13 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:33.499 14:06:13 -- nvmf/common.sh@7 -- # uname -s 00:23:33.499 14:06:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:33.499 14:06:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:33.499 14:06:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:33.499 14:06:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:33.499 14:06:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:33.499 14:06:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:33.499 14:06:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:33.499 14:06:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:33.499 14:06:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:33.499 14:06:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:33.499 14:06:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:33.499 14:06:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:33.499 14:06:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:33.499 14:06:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:33.499 14:06:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:33.499 14:06:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:33.499 14:06:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:33.499 14:06:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:33.499 14:06:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:33.499 14:06:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:33.500 14:06:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.500 14:06:13 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.500 14:06:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.500 14:06:13 -- paths/export.sh@5 -- # export PATH 00:23:33.500 14:06:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:33.500 14:06:13 -- nvmf/common.sh@47 -- # : 0 00:23:33.500 14:06:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:33.500 14:06:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:33.500 14:06:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:33.500 14:06:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:33.500 14:06:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:33.500 14:06:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:33.500 14:06:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:33.500 14:06:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:33.500 14:06:13 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:33.500 14:06:13 -- host/async_init.sh@14 -- # null_block_size=512 00:23:33.500 14:06:13 -- host/async_init.sh@15 -- # null_bdev=null0 00:23:33.500 14:06:13 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:33.500 14:06:13 -- host/async_init.sh@20 -- # uuidgen 00:23:33.500 14:06:13 -- host/async_init.sh@20 -- # tr -d - 00:23:33.500 14:06:13 -- host/async_init.sh@20 -- # nguid=d6eb11d17dab4bcaa6254024c76c18d9 00:23:33.500 14:06:13 -- host/async_init.sh@22 -- # nvmftestinit 00:23:33.500 14:06:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:33.500 14:06:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:33.500 14:06:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:33.500 14:06:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:33.500 14:06:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:33.500 14:06:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.500 14:06:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:33.500 14:06:13 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:23:33.500 14:06:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:33.500 14:06:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:33.759 14:06:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:33.759 14:06:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:33.759 14:06:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:33.759 14:06:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:33.759 14:06:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:33.759 14:06:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:33.759 14:06:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:33.759 14:06:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:33.759 14:06:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:33.759 14:06:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:33.759 14:06:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:33.759 14:06:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:33.759 14:06:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:33.759 14:06:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:33.759 14:06:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:33.759 14:06:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:33.759 14:06:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:33.759 14:06:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:33.759 Cannot find device "nvmf_tgt_br" 00:23:33.759 14:06:13 -- nvmf/common.sh@155 -- # true 00:23:33.759 14:06:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:33.759 Cannot find device "nvmf_tgt_br2" 00:23:33.759 14:06:13 -- nvmf/common.sh@156 -- # true 00:23:33.759 14:06:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:33.759 14:06:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:33.759 Cannot find device "nvmf_tgt_br" 00:23:33.759 14:06:13 -- nvmf/common.sh@158 -- # true 00:23:33.759 14:06:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:33.759 Cannot find device "nvmf_tgt_br2" 00:23:33.759 14:06:13 -- nvmf/common.sh@159 -- # true 00:23:33.759 14:06:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:33.759 14:06:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:33.759 14:06:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:33.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.759 14:06:13 -- nvmf/common.sh@162 -- # true 00:23:33.759 14:06:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:33.759 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:33.759 14:06:13 -- nvmf/common.sh@163 -- # true 00:23:33.759 14:06:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:33.759 14:06:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:33.759 14:06:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:33.759 14:06:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:33.759 14:06:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:33.759 14:06:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:23:33.759 14:06:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:33.759 14:06:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:33.759 14:06:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:34.018 14:06:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:34.018 14:06:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:34.018 14:06:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:34.018 14:06:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:34.018 14:06:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.018 14:06:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.018 14:06:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.018 14:06:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:34.018 14:06:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:34.018 14:06:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.018 14:06:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.018 14:06:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.018 14:06:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.018 14:06:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.018 14:06:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:34.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:23:34.018 00:23:34.018 --- 10.0.0.2 ping statistics --- 00:23:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.018 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:23:34.018 14:06:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:34.018 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.018 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:23:34.018 00:23:34.018 --- 10.0.0.3 ping statistics --- 00:23:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.018 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:34.018 14:06:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:23:34.018 00:23:34.018 --- 10.0.0.1 ping statistics --- 00:23:34.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.018 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:23:34.018 14:06:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.018 14:06:13 -- nvmf/common.sh@422 -- # return 0 00:23:34.018 14:06:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:34.018 14:06:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.018 14:06:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:34.018 14:06:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:34.018 14:06:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.018 14:06:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:34.018 14:06:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:34.018 14:06:13 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:34.018 14:06:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:34.018 14:06:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:34.018 14:06:13 -- common/autotest_common.sh@10 -- # set +x 00:23:34.018 14:06:13 -- nvmf/common.sh@470 -- # nvmfpid=82224 00:23:34.018 14:06:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:34.018 14:06:13 -- nvmf/common.sh@471 -- # waitforlisten 82224 00:23:34.018 14:06:13 -- common/autotest_common.sh@817 -- # '[' -z 82224 ']' 00:23:34.018 14:06:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.018 14:06:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:34.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.018 14:06:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.018 14:06:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:34.018 14:06:13 -- common/autotest_common.sh@10 -- # set +x 00:23:34.276 [2024-04-26 14:06:13.722045] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:34.276 [2024-04-26 14:06:13.722181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.276 [2024-04-26 14:06:13.895023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.569 [2024-04-26 14:06:14.138615] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.569 [2024-04-26 14:06:14.138678] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.569 [2024-04-26 14:06:14.138694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.569 [2024-04-26 14:06:14.138715] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.569 [2024-04-26 14:06:14.138729] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:34.569 [2024-04-26 14:06:14.138768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.136 14:06:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:35.136 14:06:14 -- common/autotest_common.sh@850 -- # return 0 00:23:35.136 14:06:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:35.136 14:06:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:35.136 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.136 14:06:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.136 14:06:14 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:35.136 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.136 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.137 [2024-04-26 14:06:14.638088] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.137 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.137 14:06:14 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:35.137 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.137 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.137 null0 00:23:35.137 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.137 14:06:14 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:35.137 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.137 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.137 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.137 14:06:14 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:35.137 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.137 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.137 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.137 14:06:14 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d6eb11d17dab4bcaa6254024c76c18d9 00:23:35.137 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.137 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.137 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.137 14:06:14 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:35.137 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.137 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.137 [2024-04-26 14:06:14.690211] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.137 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.137 14:06:14 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:35.137 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.137 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.395 nvme0n1 00:23:35.395 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.395 14:06:14 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:35.395 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.395 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.395 [ 00:23:35.395 { 00:23:35.395 "aliases": [ 00:23:35.395 "d6eb11d1-7dab-4bca-a625-4024c76c18d9" 
00:23:35.395 ], 00:23:35.395 "assigned_rate_limits": { 00:23:35.395 "r_mbytes_per_sec": 0, 00:23:35.395 "rw_ios_per_sec": 0, 00:23:35.395 "rw_mbytes_per_sec": 0, 00:23:35.395 "w_mbytes_per_sec": 0 00:23:35.395 }, 00:23:35.395 "block_size": 512, 00:23:35.395 "claimed": false, 00:23:35.395 "driver_specific": { 00:23:35.395 "mp_policy": "active_passive", 00:23:35.395 "nvme": [ 00:23:35.395 { 00:23:35.395 "ctrlr_data": { 00:23:35.395 "ana_reporting": false, 00:23:35.395 "cntlid": 1, 00:23:35.395 "firmware_revision": "24.05", 00:23:35.395 "model_number": "SPDK bdev Controller", 00:23:35.395 "multi_ctrlr": true, 00:23:35.395 "oacs": { 00:23:35.395 "firmware": 0, 00:23:35.395 "format": 0, 00:23:35.395 "ns_manage": 0, 00:23:35.395 "security": 0 00:23:35.395 }, 00:23:35.395 "serial_number": "00000000000000000000", 00:23:35.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.395 "vendor_id": "0x8086" 00:23:35.395 }, 00:23:35.395 "ns_data": { 00:23:35.395 "can_share": true, 00:23:35.395 "id": 1 00:23:35.395 }, 00:23:35.395 "trid": { 00:23:35.395 "adrfam": "IPv4", 00:23:35.395 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.395 "traddr": "10.0.0.2", 00:23:35.395 "trsvcid": "4420", 00:23:35.395 "trtype": "TCP" 00:23:35.395 }, 00:23:35.395 "vs": { 00:23:35.395 "nvme_version": "1.3" 00:23:35.395 } 00:23:35.395 } 00:23:35.395 ] 00:23:35.395 }, 00:23:35.395 "memory_domains": [ 00:23:35.395 { 00:23:35.395 "dma_device_id": "system", 00:23:35.395 "dma_device_type": 1 00:23:35.395 } 00:23:35.395 ], 00:23:35.395 "name": "nvme0n1", 00:23:35.395 "num_blocks": 2097152, 00:23:35.395 "product_name": "NVMe disk", 00:23:35.395 "supported_io_types": { 00:23:35.395 "abort": true, 00:23:35.395 "compare": true, 00:23:35.395 "compare_and_write": true, 00:23:35.395 "flush": true, 00:23:35.395 "nvme_admin": true, 00:23:35.395 "nvme_io": true, 00:23:35.395 "read": true, 00:23:35.395 "reset": true, 00:23:35.395 "unmap": false, 00:23:35.396 "write": true, 00:23:35.396 "write_zeroes": true 00:23:35.396 }, 00:23:35.396 "uuid": "d6eb11d1-7dab-4bca-a625-4024c76c18d9", 00:23:35.396 "zoned": false 00:23:35.396 } 00:23:35.396 ] 00:23:35.396 14:06:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.396 14:06:14 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:35.396 14:06:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.396 14:06:14 -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 [2024-04-26 14:06:14.979754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:35.396 [2024-04-26 14:06:14.979912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:23:35.654 [2024-04-26 14:06:15.122389] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:35.654 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.654 14:06:15 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:35.654 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.654 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.654 [ 00:23:35.654 { 00:23:35.654 "aliases": [ 00:23:35.654 "d6eb11d1-7dab-4bca-a625-4024c76c18d9" 00:23:35.654 ], 00:23:35.654 "assigned_rate_limits": { 00:23:35.654 "r_mbytes_per_sec": 0, 00:23:35.654 "rw_ios_per_sec": 0, 00:23:35.654 "rw_mbytes_per_sec": 0, 00:23:35.654 "w_mbytes_per_sec": 0 00:23:35.654 }, 00:23:35.654 "block_size": 512, 00:23:35.654 "claimed": false, 00:23:35.654 "driver_specific": { 00:23:35.654 "mp_policy": "active_passive", 00:23:35.654 "nvme": [ 00:23:35.654 { 00:23:35.654 "ctrlr_data": { 00:23:35.654 "ana_reporting": false, 00:23:35.654 "cntlid": 2, 00:23:35.654 "firmware_revision": "24.05", 00:23:35.654 "model_number": "SPDK bdev Controller", 00:23:35.654 "multi_ctrlr": true, 00:23:35.654 "oacs": { 00:23:35.654 "firmware": 0, 00:23:35.654 "format": 0, 00:23:35.654 "ns_manage": 0, 00:23:35.654 "security": 0 00:23:35.654 }, 00:23:35.654 "serial_number": "00000000000000000000", 00:23:35.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.654 "vendor_id": "0x8086" 00:23:35.654 }, 00:23:35.654 "ns_data": { 00:23:35.654 "can_share": true, 00:23:35.654 "id": 1 00:23:35.654 }, 00:23:35.654 "trid": { 00:23:35.654 "adrfam": "IPv4", 00:23:35.654 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.654 "traddr": "10.0.0.2", 00:23:35.654 "trsvcid": "4420", 00:23:35.654 "trtype": "TCP" 00:23:35.654 }, 00:23:35.654 "vs": { 00:23:35.654 "nvme_version": "1.3" 00:23:35.654 } 00:23:35.654 } 00:23:35.654 ] 00:23:35.654 }, 00:23:35.654 "memory_domains": [ 00:23:35.654 { 00:23:35.654 "dma_device_id": "system", 00:23:35.654 "dma_device_type": 1 00:23:35.654 } 00:23:35.654 ], 00:23:35.654 "name": "nvme0n1", 00:23:35.654 "num_blocks": 2097152, 00:23:35.654 "product_name": "NVMe disk", 00:23:35.654 "supported_io_types": { 00:23:35.654 "abort": true, 00:23:35.654 "compare": true, 00:23:35.655 "compare_and_write": true, 00:23:35.655 "flush": true, 00:23:35.655 "nvme_admin": true, 00:23:35.655 "nvme_io": true, 00:23:35.655 "read": true, 00:23:35.655 "reset": true, 00:23:35.655 "unmap": false, 00:23:35.655 "write": true, 00:23:35.655 "write_zeroes": true 00:23:35.655 }, 00:23:35.655 "uuid": "d6eb11d1-7dab-4bca-a625-4024c76c18d9", 00:23:35.655 "zoned": false 00:23:35.655 } 00:23:35.655 ] 00:23:35.655 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.655 14:06:15 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.655 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.655 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.655 14:06:15 -- host/async_init.sh@53 -- # mktemp 00:23:35.655 14:06:15 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.JHtybFz83V 00:23:35.655 14:06:15 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:35.655 14:06:15 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.JHtybFz83V 00:23:35.655 14:06:15 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.655 14:06:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.655 14:06:15 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:35.655 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.655 [2024-04-26 14:06:15.220047] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.655 [2024-04-26 14:06:15.220332] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:35.655 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.655 14:06:15 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JHtybFz83V 00:23:35.655 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.655 [2024-04-26 14:06:15.232031] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:35.655 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.655 14:06:15 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.JHtybFz83V 00:23:35.655 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.655 [2024-04-26 14:06:15.243949] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:35.655 [2024-04-26 14:06:15.244069] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:35.655 nvme0n1 00:23:35.655 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.655 14:06:15 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:35.655 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.655 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 [ 00:23:35.914 { 00:23:35.914 "aliases": [ 00:23:35.914 "d6eb11d1-7dab-4bca-a625-4024c76c18d9" 00:23:35.914 ], 00:23:35.914 "assigned_rate_limits": { 00:23:35.914 "r_mbytes_per_sec": 0, 00:23:35.914 "rw_ios_per_sec": 0, 00:23:35.914 "rw_mbytes_per_sec": 0, 00:23:35.914 "w_mbytes_per_sec": 0 00:23:35.914 }, 00:23:35.914 "block_size": 512, 00:23:35.914 "claimed": false, 00:23:35.914 "driver_specific": { 00:23:35.914 "mp_policy": "active_passive", 00:23:35.914 "nvme": [ 00:23:35.914 { 00:23:35.914 "ctrlr_data": { 00:23:35.914 "ana_reporting": false, 00:23:35.914 "cntlid": 3, 00:23:35.914 "firmware_revision": "24.05", 00:23:35.914 "model_number": "SPDK bdev Controller", 00:23:35.914 "multi_ctrlr": true, 00:23:35.914 "oacs": { 00:23:35.914 "firmware": 0, 00:23:35.914 "format": 0, 00:23:35.914 "ns_manage": 0, 00:23:35.914 "security": 0 00:23:35.914 }, 00:23:35.914 "serial_number": "00000000000000000000", 00:23:35.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.914 "vendor_id": "0x8086" 00:23:35.914 }, 00:23:35.914 "ns_data": { 00:23:35.914 "can_share": true, 00:23:35.914 "id": 1 00:23:35.914 }, 00:23:35.914 "trid": { 00:23:35.914 "adrfam": "IPv4", 00:23:35.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.914 "traddr": "10.0.0.2", 00:23:35.914 "trsvcid": "4421", 00:23:35.914 "trtype": 
"TCP" 00:23:35.914 }, 00:23:35.914 "vs": { 00:23:35.914 "nvme_version": "1.3" 00:23:35.914 } 00:23:35.914 } 00:23:35.914 ] 00:23:35.914 }, 00:23:35.914 "memory_domains": [ 00:23:35.914 { 00:23:35.914 "dma_device_id": "system", 00:23:35.914 "dma_device_type": 1 00:23:35.914 } 00:23:35.914 ], 00:23:35.914 "name": "nvme0n1", 00:23:35.914 "num_blocks": 2097152, 00:23:35.914 "product_name": "NVMe disk", 00:23:35.914 "supported_io_types": { 00:23:35.914 "abort": true, 00:23:35.914 "compare": true, 00:23:35.914 "compare_and_write": true, 00:23:35.914 "flush": true, 00:23:35.914 "nvme_admin": true, 00:23:35.914 "nvme_io": true, 00:23:35.914 "read": true, 00:23:35.914 "reset": true, 00:23:35.914 "unmap": false, 00:23:35.914 "write": true, 00:23:35.914 "write_zeroes": true 00:23:35.914 }, 00:23:35.914 "uuid": "d6eb11d1-7dab-4bca-a625-4024c76c18d9", 00:23:35.914 "zoned": false 00:23:35.914 } 00:23:35.914 ] 00:23:35.914 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.914 14:06:15 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.914 14:06:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.914 14:06:15 -- common/autotest_common.sh@10 -- # set +x 00:23:35.914 14:06:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.914 14:06:15 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.JHtybFz83V 00:23:35.914 14:06:15 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:35.914 14:06:15 -- host/async_init.sh@78 -- # nvmftestfini 00:23:35.914 14:06:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:35.914 14:06:15 -- nvmf/common.sh@117 -- # sync 00:23:35.914 14:06:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:35.914 14:06:15 -- nvmf/common.sh@120 -- # set +e 00:23:35.914 14:06:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:35.914 14:06:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:35.914 rmmod nvme_tcp 00:23:35.914 rmmod nvme_fabrics 00:23:35.914 rmmod nvme_keyring 00:23:35.914 14:06:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:35.914 14:06:15 -- nvmf/common.sh@124 -- # set -e 00:23:35.914 14:06:15 -- nvmf/common.sh@125 -- # return 0 00:23:35.914 14:06:15 -- nvmf/common.sh@478 -- # '[' -n 82224 ']' 00:23:35.914 14:06:15 -- nvmf/common.sh@479 -- # killprocess 82224 00:23:35.914 14:06:15 -- common/autotest_common.sh@936 -- # '[' -z 82224 ']' 00:23:35.914 14:06:15 -- common/autotest_common.sh@940 -- # kill -0 82224 00:23:35.914 14:06:15 -- common/autotest_common.sh@941 -- # uname 00:23:35.914 14:06:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:35.914 14:06:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82224 00:23:35.914 killing process with pid 82224 00:23:35.914 14:06:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:35.914 14:06:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:35.914 14:06:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82224' 00:23:35.914 14:06:15 -- common/autotest_common.sh@955 -- # kill 82224 00:23:35.914 14:06:15 -- common/autotest_common.sh@960 -- # wait 82224 00:23:35.914 [2024-04-26 14:06:15.542496] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:35.914 [2024-04-26 14:06:15.542564] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:37.291 14:06:16 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:23:37.291 14:06:16 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:37.291 14:06:16 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:37.291 14:06:16 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:37.291 14:06:16 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:37.291 14:06:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.291 14:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.291 14:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.291 14:06:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:37.291 00:23:37.291 real 0m3.901s 00:23:37.291 user 0m3.405s 00:23:37.291 sys 0m0.943s 00:23:37.291 14:06:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:37.291 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:23:37.291 ************************************ 00:23:37.291 END TEST nvmf_async_init 00:23:37.291 ************************************ 00:23:37.291 14:06:16 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:37.291 14:06:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:37.291 14:06:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:37.291 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:23:37.550 ************************************ 00:23:37.550 START TEST dma 00:23:37.550 ************************************ 00:23:37.550 14:06:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:37.550 * Looking for test storage... 00:23:37.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:37.550 14:06:17 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:37.550 14:06:17 -- nvmf/common.sh@7 -- # uname -s 00:23:37.550 14:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.550 14:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.550 14:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.550 14:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.550 14:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.550 14:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.550 14:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.550 14:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.551 14:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.551 14:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.551 14:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:37.551 14:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:37.551 14:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.551 14:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.551 14:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:37.551 14:06:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.551 14:06:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:37.551 14:06:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.551 14:06:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.551 14:06:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:23:37.551 14:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.551 14:06:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.551 14:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.551 14:06:17 -- paths/export.sh@5 -- # export PATH 00:23:37.551 14:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.551 14:06:17 -- nvmf/common.sh@47 -- # : 0 00:23:37.551 14:06:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.551 14:06:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.551 14:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.551 14:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.551 14:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.551 14:06:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.551 14:06:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.551 14:06:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.551 14:06:17 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:37.551 14:06:17 -- host/dma.sh@13 -- # exit 0 00:23:37.551 00:23:37.551 real 0m0.176s 00:23:37.551 user 0m0.077s 00:23:37.551 sys 0m0.097s 00:23:37.551 14:06:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:37.551 14:06:17 -- common/autotest_common.sh@10 -- # set +x 00:23:37.551 ************************************ 00:23:37.551 END TEST dma 00:23:37.551 ************************************ 00:23:37.810 14:06:17 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:37.810 14:06:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:37.810 14:06:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:37.810 14:06:17 -- common/autotest_common.sh@10 -- # set +x 00:23:37.810 ************************************ 00:23:37.810 START TEST nvmf_identify 00:23:37.810 ************************************ 00:23:37.810 14:06:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:37.810 * Looking for test storage... 00:23:38.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:38.069 14:06:17 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:38.069 14:06:17 -- nvmf/common.sh@7 -- # uname -s 00:23:38.069 14:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.069 14:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.069 14:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.069 14:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.069 14:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.069 14:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.069 14:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.069 14:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.069 14:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.069 14:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.069 14:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:38.069 14:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:38.069 14:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.069 14:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.069 14:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:38.069 14:06:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.069 14:06:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:38.069 14:06:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.069 14:06:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.069 14:06:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.069 14:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.069 14:06:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.069 14:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.069 14:06:17 -- paths/export.sh@5 -- # export PATH 00:23:38.069 14:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.070 14:06:17 -- nvmf/common.sh@47 -- # : 0 00:23:38.070 14:06:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:38.070 14:06:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:38.070 14:06:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.070 14:06:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.070 14:06:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.070 14:06:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:38.070 14:06:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:38.070 14:06:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:38.070 14:06:17 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:38.070 14:06:17 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:38.070 14:06:17 -- host/identify.sh@14 -- # nvmftestinit 00:23:38.070 14:06:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:38.070 14:06:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.070 14:06:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:38.070 14:06:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:38.070 14:06:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:38.070 14:06:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.070 14:06:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.070 14:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.070 14:06:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:38.070 14:06:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:38.070 14:06:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:38.070 14:06:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:38.070 14:06:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:38.070 14:06:17 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:23:38.070 14:06:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.070 14:06:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.070 14:06:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:38.070 14:06:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:38.070 14:06:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:38.070 14:06:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:38.070 14:06:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:38.070 14:06:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.070 14:06:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:38.070 14:06:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:38.070 14:06:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:38.070 14:06:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:38.070 14:06:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:38.070 14:06:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:38.070 Cannot find device "nvmf_tgt_br" 00:23:38.070 14:06:17 -- nvmf/common.sh@155 -- # true 00:23:38.070 14:06:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.070 Cannot find device "nvmf_tgt_br2" 00:23:38.070 14:06:17 -- nvmf/common.sh@156 -- # true 00:23:38.070 14:06:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:38.070 14:06:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:38.070 Cannot find device "nvmf_tgt_br" 00:23:38.070 14:06:17 -- nvmf/common.sh@158 -- # true 00:23:38.070 14:06:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:38.070 Cannot find device "nvmf_tgt_br2" 00:23:38.070 14:06:17 -- nvmf/common.sh@159 -- # true 00:23:38.070 14:06:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:38.070 14:06:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:38.070 14:06:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.070 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.070 14:06:17 -- nvmf/common.sh@162 -- # true 00:23:38.070 14:06:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.329 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:38.329 14:06:17 -- nvmf/common.sh@163 -- # true 00:23:38.329 14:06:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:38.329 14:06:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:38.329 14:06:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:38.329 14:06:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:38.329 14:06:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:38.329 14:06:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:38.329 14:06:17 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:38.329 14:06:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:38.329 14:06:17 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:38.329 14:06:17 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:23:38.329 14:06:17 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:38.329 14:06:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:38.329 14:06:17 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:38.329 14:06:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:38.329 14:06:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:38.329 14:06:17 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:38.329 14:06:17 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:38.329 14:06:17 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:38.329 14:06:17 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:38.329 14:06:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:38.329 14:06:17 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:38.329 14:06:17 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:38.329 14:06:17 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:38.329 14:06:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:38.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:23:38.329 00:23:38.329 --- 10.0.0.2 ping statistics --- 00:23:38.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.329 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:23:38.329 14:06:17 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:38.329 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:38.329 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:23:38.329 00:23:38.329 --- 10.0.0.3 ping statistics --- 00:23:38.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.329 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:38.329 14:06:17 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:38.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:23:38.329 00:23:38.329 --- 10.0.0.1 ping statistics --- 00:23:38.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.329 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:38.329 14:06:17 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.329 14:06:17 -- nvmf/common.sh@422 -- # return 0 00:23:38.329 14:06:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:38.329 14:06:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.329 14:06:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:38.329 14:06:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:38.329 14:06:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.329 14:06:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:38.329 14:06:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:38.588 14:06:18 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:38.588 14:06:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:38.589 14:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:38.589 14:06:18 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:38.589 14:06:18 -- host/identify.sh@19 -- # nvmfpid=82518 00:23:38.589 14:06:18 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:38.589 14:06:18 -- host/identify.sh@23 -- # waitforlisten 82518 00:23:38.589 14:06:18 -- common/autotest_common.sh@817 -- # '[' -z 82518 ']' 00:23:38.589 14:06:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.589 14:06:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:38.589 14:06:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.589 14:06:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:38.589 14:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:38.589 [2024-04-26 14:06:18.110981] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:38.589 [2024-04-26 14:06:18.111096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.848 [2024-04-26 14:06:18.288675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.107 [2024-04-26 14:06:18.533551] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.107 [2024-04-26 14:06:18.533617] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.107 [2024-04-26 14:06:18.533634] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.107 [2024-04-26 14:06:18.533646] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.107 [2024-04-26 14:06:18.533659] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
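For reference, the nvmf_veth_init sequence traced above boils down to the following iproute2/iptables commands. This is a minimal sketch reconstructed from the trace, not a verbatim copy of nvmf/common.sh; it assumes root on the build VM and reuses the interface names and 10.0.0.x addresses seen above.

# Reconstructed sketch of nvmf_veth_init (from the trace above)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                   # host -> target namespace
ping -c 1 10.0.0.3                                   # host -> second target address
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> host

The iptables INPUT rule opens the NVMe/TCP port 4420 on the initiator-side interface, the FORWARD rule lets traffic hairpin across nvmf_br, and the three pings are the same reachability checks whose output appears in the log above.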
00:23:39.107 [2024-04-26 14:06:18.533893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.107 [2024-04-26 14:06:18.534114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.107 [2024-04-26 14:06:18.534314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.107 [2024-04-26 14:06:18.534355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.366 14:06:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:39.366 14:06:18 -- common/autotest_common.sh@850 -- # return 0 00:23:39.366 14:06:18 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:39.366 14:06:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.366 14:06:18 -- common/autotest_common.sh@10 -- # set +x 00:23:39.366 [2024-04-26 14:06:18.990533] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:39.366 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.366 14:06:19 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:39.366 14:06:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:39.366 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 14:06:19 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:39.625 14:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.625 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 Malloc0 00:23:39.625 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.625 14:06:19 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:39.625 14:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.625 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.625 14:06:19 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:39.625 14:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.625 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.625 14:06:19 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:39.625 14:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.625 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 [2024-04-26 14:06:19.183743] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:39.625 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.625 14:06:19 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:39.625 14:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.625 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.625 14:06:19 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:39.625 14:06:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:39.625 14:06:19 -- common/autotest_common.sh@10 -- # set +x 00:23:39.625 [2024-04-26 14:06:19.207403] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:23:39.625 [ 
00:23:39.625 { 00:23:39.625 "allow_any_host": true, 00:23:39.625 "hosts": [], 00:23:39.625 "listen_addresses": [ 00:23:39.625 { 00:23:39.625 "adrfam": "IPv4", 00:23:39.625 "traddr": "10.0.0.2", 00:23:39.625 "transport": "TCP", 00:23:39.625 "trsvcid": "4420", 00:23:39.625 "trtype": "TCP" 00:23:39.625 } 00:23:39.625 ], 00:23:39.625 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:39.625 "subtype": "Discovery" 00:23:39.625 }, 00:23:39.625 { 00:23:39.625 "allow_any_host": true, 00:23:39.625 "hosts": [], 00:23:39.625 "listen_addresses": [ 00:23:39.625 { 00:23:39.625 "adrfam": "IPv4", 00:23:39.625 "traddr": "10.0.0.2", 00:23:39.625 "transport": "TCP", 00:23:39.625 "trsvcid": "4420", 00:23:39.625 "trtype": "TCP" 00:23:39.625 } 00:23:39.625 ], 00:23:39.625 "max_cntlid": 65519, 00:23:39.625 "max_namespaces": 32, 00:23:39.625 "min_cntlid": 1, 00:23:39.625 "model_number": "SPDK bdev Controller", 00:23:39.625 "namespaces": [ 00:23:39.625 { 00:23:39.625 "bdev_name": "Malloc0", 00:23:39.625 "eui64": "ABCDEF0123456789", 00:23:39.625 "name": "Malloc0", 00:23:39.625 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:39.625 "nsid": 1, 00:23:39.625 "uuid": "f3ecdb21-14fa-42ae-8871-713ea4001e3b" 00:23:39.625 } 00:23:39.625 ], 00:23:39.625 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.625 "serial_number": "SPDK00000000000001", 00:23:39.625 "subtype": "NVMe" 00:23:39.625 } 00:23:39.625 ] 00:23:39.625 14:06:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:39.625 14:06:19 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:39.625 [2024-04-26 14:06:19.275768] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
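The target configuration that produced the subsystem JSON above was driven through rpc_cmd, the autotest wrapper around SPDK's scripts/rpc.py. Below is a minimal sketch of the equivalent calls, assuming the target is listening on the default /var/tmp/spdk.sock shown in the waitforlisten message; the arguments are taken directly from the rpc_cmd lines in the trace.

# Target side (sketch): launch nvmf_tgt in the namespace, then configure it over RPC
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_get_subsystems
# Initiator side: query the discovery subsystem exactly as the test does
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all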
00:23:39.625 [2024-04-26 14:06:19.276053] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82571 ] 00:23:39.887 [2024-04-26 14:06:19.432619] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:39.887 [2024-04-26 14:06:19.432753] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:39.887 [2024-04-26 14:06:19.432768] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:39.887 [2024-04-26 14:06:19.432792] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:39.887 [2024-04-26 14:06:19.432808] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:39.887 [2024-04-26 14:06:19.432964] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:39.887 [2024-04-26 14:06:19.433013] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:23:39.887 [2024-04-26 14:06:19.438183] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:39.887 [2024-04-26 14:06:19.438221] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:39.887 [2024-04-26 14:06:19.438230] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:39.887 [2024-04-26 14:06:19.438238] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:39.887 [2024-04-26 14:06:19.438324] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.887 [2024-04-26 14:06:19.438341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.887 [2024-04-26 14:06:19.438352] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.887 [2024-04-26 14:06:19.438376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:39.887 [2024-04-26 14:06:19.438414] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.887 [2024-04-26 14:06:19.446187] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.887 [2024-04-26 14:06:19.446220] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.887 [2024-04-26 14:06:19.446227] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.887 [2024-04-26 14:06:19.446235] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.887 [2024-04-26 14:06:19.446265] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:39.887 [2024-04-26 14:06:19.446284] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:39.887 [2024-04-26 14:06:19.446294] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:39.887 [2024-04-26 14:06:19.446316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.887 [2024-04-26 14:06:19.446324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:39.887 [2024-04-26 14:06:19.446331] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.887 [2024-04-26 14:06:19.446346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.887 [2024-04-26 14:06:19.446378] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.887 [2024-04-26 14:06:19.446477] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.887 [2024-04-26 14:06:19.446489] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.887 [2024-04-26 14:06:19.446496] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.887 [2024-04-26 14:06:19.446502] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.887 [2024-04-26 14:06:19.446515] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:39.887 [2024-04-26 14:06:19.446526] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:39.887 [2024-04-26 14:06:19.446537] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.887 [2024-04-26 14:06:19.446546] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446553] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.446567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.888 [2024-04-26 14:06:19.446587] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.446643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.888 [2024-04-26 14:06:19.446654] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 14:06:19.446659] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.446673] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:39.888 [2024-04-26 14:06:19.446685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:39.888 [2024-04-26 14:06:19.446695] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446705] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446711] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.446721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.888 [2024-04-26 14:06:19.446739] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.446790] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
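The DEBUG trace continuing below (FABRIC CONNECT, then FABRIC PROPERTY GET/SET of VS, CAP, CC and CSTS, then IDENTIFY, AER configuration and keep-alive setup) is the controller bring-up sequence spdk_nvme_identify performs over the admin queue before it can dump the discovery log. As an aside, the same listener can also be exercised with the kernel initiator that was modprobed earlier (nvme-tcp); this nvme-cli sketch is an assumption for illustration and is not part of this test:

nvme discover -t tcp -a 10.0.0.2 -s 4420                            # returns the same discovery log entries dumped below
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1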
00:23:39.888 [2024-04-26 14:06:19.446798] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 14:06:19.446804] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446809] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.446817] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:39.888 [2024-04-26 14:06:19.446830] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446836] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446843] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.446853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.888 [2024-04-26 14:06:19.446883] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.446933] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.888 [2024-04-26 14:06:19.446941] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 14:06:19.446946] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.446952] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.446960] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:39.888 [2024-04-26 14:06:19.446968] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:39.888 [2024-04-26 14:06:19.446981] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:39.888 [2024-04-26 14:06:19.447089] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:39.888 [2024-04-26 14:06:19.447097] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:39.888 [2024-04-26 14:06:19.447109] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447116] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447123] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.447133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.888 [2024-04-26 14:06:19.447165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.447228] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.888 [2024-04-26 14:06:19.447237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 
14:06:19.447242] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447247] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.447255] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:39.888 [2024-04-26 14:06:19.447269] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447275] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.447292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.888 [2024-04-26 14:06:19.447310] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.447368] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.888 [2024-04-26 14:06:19.447376] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 14:06:19.447381] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447387] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.447394] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:39.888 [2024-04-26 14:06:19.447402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:39.888 [2024-04-26 14:06:19.447430] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:39.888 [2024-04-26 14:06:19.447446] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:39.888 [2024-04-26 14:06:19.447466] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447472] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.447483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.888 [2024-04-26 14:06:19.447504] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.447602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:39.888 [2024-04-26 14:06:19.447610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:39.888 [2024-04-26 14:06:19.447616] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447623] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:23:39.888 [2024-04-26 14:06:19.447630] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on 
tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:39.888 [2024-04-26 14:06:19.447638] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447649] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447658] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447670] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.888 [2024-04-26 14:06:19.447682] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 14:06:19.447687] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447693] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.447708] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:39.888 [2024-04-26 14:06:19.447718] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:39.888 [2024-04-26 14:06:19.447726] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:39.888 [2024-04-26 14:06:19.447737] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:39.888 [2024-04-26 14:06:19.447744] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:39.888 [2024-04-26 14:06:19.447752] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:39.888 [2024-04-26 14:06:19.447763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:39.888 [2024-04-26 14:06:19.447777] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447784] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447790] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 14:06:19.447805] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:39.888 [2024-04-26 14:06:19.447824] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.888 [2024-04-26 14:06:19.447896] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.888 [2024-04-26 14:06:19.447904] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.888 [2024-04-26 14:06:19.447910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447915] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:39.888 [2024-04-26 14:06:19.447928] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447934] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:39.888 [2024-04-26 
14:06:19.447953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.888 [2024-04-26 14:06:19.447963] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447969] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.888 [2024-04-26 14:06:19.447974] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.447986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.889 [2024-04-26 14:06:19.447994] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.447999] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448005] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.448014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.889 [2024-04-26 14:06:19.448022] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448028] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448033] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.448042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.889 [2024-04-26 14:06:19.448049] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:39.889 [2024-04-26 14:06:19.448063] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:39.889 [2024-04-26 14:06:19.448072] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448085] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.448098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.889 [2024-04-26 14:06:19.448123] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:39.889 [2024-04-26 14:06:19.448131] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:23:39.889 [2024-04-26 14:06:19.448138] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:23:39.889 [2024-04-26 14:06:19.448144] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.889 [2024-04-26 14:06:19.448150] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:39.889 [2024-04-26 14:06:19.448250] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.889 [2024-04-26 14:06:19.448261] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.889 [2024-04-26 14:06:19.448266] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448272] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:39.889 [2024-04-26 14:06:19.448283] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:39.889 [2024-04-26 14:06:19.448292] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:39.889 [2024-04-26 14:06:19.448310] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.448330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.889 [2024-04-26 14:06:19.448350] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:39.889 [2024-04-26 14:06:19.448421] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:39.889 [2024-04-26 14:06:19.448432] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:39.889 [2024-04-26 14:06:19.448443] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448450] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:39.889 [2024-04-26 14:06:19.448458] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:39.889 [2024-04-26 14:06:19.448465] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448475] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448481] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448492] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.889 [2024-04-26 14:06:19.448500] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.889 [2024-04-26 14:06:19.448505] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448512] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:39.889 [2024-04-26 14:06:19.448534] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:39.889 [2024-04-26 14:06:19.448584] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448592] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.448603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.889 [2024-04-26 14:06:19.448613] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448619] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.448625] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.448637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:39.889 [2024-04-26 14:06:19.448661] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:39.889 [2024-04-26 14:06:19.448669] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:39.889 [2024-04-26 14:06:19.448973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:39.889 [2024-04-26 14:06:19.448992] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:39.889 [2024-04-26 14:06:19.448999] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.449005] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:23:39.889 [2024-04-26 14:06:19.449016] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:23:39.889 [2024-04-26 14:06:19.449023] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.449037] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.449044] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.449054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.889 [2024-04-26 14:06:19.449063] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.889 [2024-04-26 14:06:19.449068] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.449074] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:39.889 [2024-04-26 14:06:19.494188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.889 [2024-04-26 14:06:19.494232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.889 [2024-04-26 14:06:19.494241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494249] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:39.889 [2024-04-26 14:06:19.494286] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494294] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.494312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.889 [2024-04-26 14:06:19.494350] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:39.889 [2024-04-26 14:06:19.494503] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:39.889 [2024-04-26 14:06:19.494512] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:39.889 [2024-04-26 14:06:19.494518] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494525] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=3072, cccid=4 00:23:39.889 [2024-04-26 14:06:19.494533] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072 00:23:39.889 [2024-04-26 14:06:19.494540] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494551] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494557] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494568] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.889 [2024-04-26 14:06:19.494576] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.889 [2024-04-26 14:06:19.494581] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494587] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:39.889 [2024-04-26 14:06:19.494602] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494616] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:39.889 [2024-04-26 14:06:19.494628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.889 [2024-04-26 14:06:19.494654] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:39.889 [2024-04-26 14:06:19.494743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:39.889 [2024-04-26 14:06:19.494751] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:39.889 [2024-04-26 14:06:19.494757] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494762] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4 00:23:39.889 [2024-04-26 14:06:19.494769] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8 00:23:39.889 [2024-04-26 14:06:19.494776] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494788] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.494793] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.536263] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.889 [2024-04-26 14:06:19.536301] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.889 [2024-04-26 14:06:19.536309] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.889 [2024-04-26 14:06:19.536317] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:39.889 ===================================================== 00:23:39.889 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:39.890 ===================================================== 00:23:39.890 Controller Capabilities/Features 00:23:39.890 ================================ 00:23:39.890 Vendor ID: 0000 00:23:39.890 Subsystem Vendor ID: 0000 00:23:39.890 Serial Number: .................... 00:23:39.890 Model Number: ........................................ 
00:23:39.890 Firmware Version: 24.05 00:23:39.890 Recommended Arb Burst: 0 00:23:39.890 IEEE OUI Identifier: 00 00 00 00:23:39.890 Multi-path I/O 00:23:39.890 May have multiple subsystem ports: No 00:23:39.890 May have multiple controllers: No 00:23:39.890 Associated with SR-IOV VF: No 00:23:39.890 Max Data Transfer Size: 131072 00:23:39.890 Max Number of Namespaces: 0 00:23:39.890 Max Number of I/O Queues: 1024 00:23:39.890 NVMe Specification Version (VS): 1.3 00:23:39.890 NVMe Specification Version (Identify): 1.3 00:23:39.890 Maximum Queue Entries: 128 00:23:39.890 Contiguous Queues Required: Yes 00:23:39.890 Arbitration Mechanisms Supported 00:23:39.890 Weighted Round Robin: Not Supported 00:23:39.890 Vendor Specific: Not Supported 00:23:39.890 Reset Timeout: 15000 ms 00:23:39.890 Doorbell Stride: 4 bytes 00:23:39.890 NVM Subsystem Reset: Not Supported 00:23:39.890 Command Sets Supported 00:23:39.890 NVM Command Set: Supported 00:23:39.890 Boot Partition: Not Supported 00:23:39.890 Memory Page Size Minimum: 4096 bytes 00:23:39.890 Memory Page Size Maximum: 4096 bytes 00:23:39.890 Persistent Memory Region: Not Supported 00:23:39.890 Optional Asynchronous Events Supported 00:23:39.890 Namespace Attribute Notices: Not Supported 00:23:39.890 Firmware Activation Notices: Not Supported 00:23:39.890 ANA Change Notices: Not Supported 00:23:39.890 PLE Aggregate Log Change Notices: Not Supported 00:23:39.890 LBA Status Info Alert Notices: Not Supported 00:23:39.890 EGE Aggregate Log Change Notices: Not Supported 00:23:39.890 Normal NVM Subsystem Shutdown event: Not Supported 00:23:39.890 Zone Descriptor Change Notices: Not Supported 00:23:39.890 Discovery Log Change Notices: Supported 00:23:39.890 Controller Attributes 00:23:39.890 128-bit Host Identifier: Not Supported 00:23:39.890 Non-Operational Permissive Mode: Not Supported 00:23:39.890 NVM Sets: Not Supported 00:23:39.890 Read Recovery Levels: Not Supported 00:23:39.890 Endurance Groups: Not Supported 00:23:39.890 Predictable Latency Mode: Not Supported 00:23:39.890 Traffic Based Keep ALive: Not Supported 00:23:39.890 Namespace Granularity: Not Supported 00:23:39.890 SQ Associations: Not Supported 00:23:39.890 UUID List: Not Supported 00:23:39.890 Multi-Domain Subsystem: Not Supported 00:23:39.890 Fixed Capacity Management: Not Supported 00:23:39.890 Variable Capacity Management: Not Supported 00:23:39.890 Delete Endurance Group: Not Supported 00:23:39.890 Delete NVM Set: Not Supported 00:23:39.890 Extended LBA Formats Supported: Not Supported 00:23:39.890 Flexible Data Placement Supported: Not Supported 00:23:39.890 00:23:39.890 Controller Memory Buffer Support 00:23:39.890 ================================ 00:23:39.890 Supported: No 00:23:39.890 00:23:39.890 Persistent Memory Region Support 00:23:39.890 ================================ 00:23:39.890 Supported: No 00:23:39.890 00:23:39.890 Admin Command Set Attributes 00:23:39.890 ============================ 00:23:39.890 Security Send/Receive: Not Supported 00:23:39.890 Format NVM: Not Supported 00:23:39.890 Firmware Activate/Download: Not Supported 00:23:39.890 Namespace Management: Not Supported 00:23:39.890 Device Self-Test: Not Supported 00:23:39.890 Directives: Not Supported 00:23:39.890 NVMe-MI: Not Supported 00:23:39.890 Virtualization Management: Not Supported 00:23:39.890 Doorbell Buffer Config: Not Supported 00:23:39.890 Get LBA Status Capability: Not Supported 00:23:39.890 Command & Feature Lockdown Capability: Not Supported 00:23:39.890 Abort Command Limit: 1 00:23:39.890 Async 
Event Request Limit: 4 00:23:39.890 Number of Firmware Slots: N/A 00:23:39.890 Firmware Slot 1 Read-Only: N/A 00:23:39.890 Firmware Activation Without Reset: N/A 00:23:39.890 Multiple Update Detection Support: N/A 00:23:39.890 Firmware Update Granularity: No Information Provided 00:23:39.890 Per-Namespace SMART Log: No 00:23:39.890 Asymmetric Namespace Access Log Page: Not Supported 00:23:39.890 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:39.890 Command Effects Log Page: Not Supported 00:23:39.890 Get Log Page Extended Data: Supported 00:23:39.890 Telemetry Log Pages: Not Supported 00:23:39.890 Persistent Event Log Pages: Not Supported 00:23:39.890 Supported Log Pages Log Page: May Support 00:23:39.890 Commands Supported & Effects Log Page: Not Supported 00:23:39.890 Feature Identifiers & Effects Log Page:May Support 00:23:39.890 NVMe-MI Commands & Effects Log Page: May Support 00:23:39.890 Data Area 4 for Telemetry Log: Not Supported 00:23:39.890 Error Log Page Entries Supported: 128 00:23:39.890 Keep Alive: Not Supported 00:23:39.890 00:23:39.890 NVM Command Set Attributes 00:23:39.890 ========================== 00:23:39.890 Submission Queue Entry Size 00:23:39.890 Max: 1 00:23:39.890 Min: 1 00:23:39.890 Completion Queue Entry Size 00:23:39.890 Max: 1 00:23:39.890 Min: 1 00:23:39.890 Number of Namespaces: 0 00:23:39.890 Compare Command: Not Supported 00:23:39.890 Write Uncorrectable Command: Not Supported 00:23:39.890 Dataset Management Command: Not Supported 00:23:39.890 Write Zeroes Command: Not Supported 00:23:39.890 Set Features Save Field: Not Supported 00:23:39.890 Reservations: Not Supported 00:23:39.890 Timestamp: Not Supported 00:23:39.890 Copy: Not Supported 00:23:39.890 Volatile Write Cache: Not Present 00:23:39.890 Atomic Write Unit (Normal): 1 00:23:39.890 Atomic Write Unit (PFail): 1 00:23:39.890 Atomic Compare & Write Unit: 1 00:23:39.890 Fused Compare & Write: Supported 00:23:39.890 Scatter-Gather List 00:23:39.890 SGL Command Set: Supported 00:23:39.890 SGL Keyed: Supported 00:23:39.890 SGL Bit Bucket Descriptor: Not Supported 00:23:39.890 SGL Metadata Pointer: Not Supported 00:23:39.890 Oversized SGL: Not Supported 00:23:39.890 SGL Metadata Address: Not Supported 00:23:39.890 SGL Offset: Supported 00:23:39.890 Transport SGL Data Block: Not Supported 00:23:39.890 Replay Protected Memory Block: Not Supported 00:23:39.890 00:23:39.890 Firmware Slot Information 00:23:39.890 ========================= 00:23:39.890 Active slot: 0 00:23:39.890 00:23:39.890 00:23:39.890 Error Log 00:23:39.890 ========= 00:23:39.890 00:23:39.890 Active Namespaces 00:23:39.890 ================= 00:23:39.890 Discovery Log Page 00:23:39.890 ================== 00:23:39.890 Generation Counter: 2 00:23:39.890 Number of Records: 2 00:23:39.890 Record Format: 0 00:23:39.890 00:23:39.890 Discovery Log Entry 0 00:23:39.890 ---------------------- 00:23:39.890 Transport Type: 3 (TCP) 00:23:39.890 Address Family: 1 (IPv4) 00:23:39.890 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:39.890 Entry Flags: 00:23:39.890 Duplicate Returned Information: 1 00:23:39.890 Explicit Persistent Connection Support for Discovery: 1 00:23:39.890 Transport Requirements: 00:23:39.890 Secure Channel: Not Required 00:23:39.890 Port ID: 0 (0x0000) 00:23:39.890 Controller ID: 65535 (0xffff) 00:23:39.890 Admin Max SQ Size: 128 00:23:39.890 Transport Service Identifier: 4420 00:23:39.890 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:39.890 Transport Address: 10.0.0.2 00:23:39.890 
Discovery Log Entry 1 00:23:39.890 ---------------------- 00:23:39.890 Transport Type: 3 (TCP) 00:23:39.890 Address Family: 1 (IPv4) 00:23:39.890 Subsystem Type: 2 (NVM Subsystem) 00:23:39.890 Entry Flags: 00:23:39.890 Duplicate Returned Information: 0 00:23:39.890 Explicit Persistent Connection Support for Discovery: 0 00:23:39.890 Transport Requirements: 00:23:39.890 Secure Channel: Not Required 00:23:39.890 Port ID: 0 (0x0000) 00:23:39.890 Controller ID: 65535 (0xffff) 00:23:39.890 Admin Max SQ Size: 128 00:23:39.890 Transport Service Identifier: 4420 00:23:39.890 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:39.890 Transport Address: 10.0.0.2 [2024-04-26 14:06:19.536461] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:39.890 [2024-04-26 14:06:19.536483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.890 [2024-04-26 14:06:19.536494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.890 [2024-04-26 14:06:19.536503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.890 [2024-04-26 14:06:19.536512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.890 [2024-04-26 14:06:19.536532] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536540] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536547] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.536564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.536596] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.536687] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.536696] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.536702] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536709] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.536721] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536728] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536734] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.536745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.536773] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.536861] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.536869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 
14:06:19.536875] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536880] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.536888] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:39.891 [2024-04-26 14:06:19.536896] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:39.891 [2024-04-26 14:06:19.536909] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536915] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.536922] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.536936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.536954] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537010] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537018] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537023] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537029] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537042] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537049] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537054] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537079] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537137] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537145] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537150] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537168] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537181] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537187] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537192] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537219] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537281] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537289] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537294] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537300] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537312] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537318] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537323] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537349] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537411] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537417] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537429] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537441] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537466] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537527] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537536] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537541] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537558] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537570] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537595] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537647] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537654] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537660] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537665] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537677] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537683] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537688] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537714] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537763] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537770] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537775] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537781] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537793] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537799] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537804] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.537900] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.537908] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.537914] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537920] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.537932] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537937] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.537943] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.537955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.891 [2024-04-26 14:06:19.537973] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.891 [2024-04-26 14:06:19.538030] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.891 [2024-04-26 14:06:19.538038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.891 [2024-04-26 14:06:19.538043] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.538049] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.891 [2024-04-26 14:06:19.538061] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.538066] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.891 [2024-04-26 14:06:19.538072] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.891 [2024-04-26 14:06:19.538081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.892 [2024-04-26 14:06:19.538097] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.892 [2024-04-26 14:06:19.538148] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.892 [2024-04-26 14:06:19.542178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.892 [2024-04-26 14:06:19.542193] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.892 [2024-04-26 14:06:19.542201] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.892 [2024-04-26 14:06:19.542235] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:39.892 [2024-04-26 14:06:19.542242] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:39.892 [2024-04-26 14:06:19.542248] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:39.892 [2024-04-26 14:06:19.542262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.892 [2024-04-26 14:06:19.542292] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:39.892 [2024-04-26 14:06:19.542380] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:39.892 [2024-04-26 14:06:19.542388] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:39.892 [2024-04-26 14:06:19.542394] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:39.892 [2024-04-26 14:06:19.542399] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:39.892 [2024-04-26 14:06:19.542410] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:23:40.152 00:23:40.153 14:06:19 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:40.153 [2024-04-26 14:06:19.646181] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:23:40.153 [2024-04-26 14:06:19.646261] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82584 ] 00:23:40.153 [2024-04-26 14:06:19.802267] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:40.153 [2024-04-26 14:06:19.802412] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:40.153 [2024-04-26 14:06:19.802428] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:40.153 [2024-04-26 14:06:19.802457] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:40.153 [2024-04-26 14:06:19.802470] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:40.153 [2024-04-26 14:06:19.802621] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:40.153 [2024-04-26 14:06:19.802671] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:23:40.153 [2024-04-26 14:06:19.808185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:40.153 [2024-04-26 14:06:19.808222] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:40.153 [2024-04-26 14:06:19.808231] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:40.153 [2024-04-26 14:06:19.808244] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:40.153 [2024-04-26 14:06:19.808327] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.808342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.808354] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.808379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:40.153 [2024-04-26 14:06:19.808418] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.815330] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.815441] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.815470] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.815501] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.815596] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:40.153 [2024-04-26 14:06:19.815647] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:40.153 [2024-04-26 14:06:19.815684] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:40.153 [2024-04-26 14:06:19.815757] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.815782] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.815805] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.815860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-04-26 14:06:19.815986] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.816081] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.816131] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.816186] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.816211] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.816243] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:40.153 [2024-04-26 14:06:19.816294] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:40.153 [2024-04-26 14:06:19.816330] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.816352] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.816374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.816422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-04-26 14:06:19.816493] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.816569] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.816598] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.816617] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.816637] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.816666] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:40.153 [2024-04-26 14:06:19.816705] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:40.153 [2024-04-26 14:06:19.816739] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.816761] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.816791] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.816827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-04-26 14:06:19.816889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.816957] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.816986] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.817004] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.817024] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.817053] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:40.153 [2024-04-26 14:06:19.817096] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.817118] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.817139] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.817202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-04-26 14:06:19.817266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.817353] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.817381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.817399] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.817419] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.817446] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:40.153 [2024-04-26 14:06:19.817490] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:40.153 [2024-04-26 14:06:19.817527] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:40.153 [2024-04-26 14:06:19.817655] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:40.153 [2024-04-26 14:06:19.817677] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:40.153 [2024-04-26 14:06:19.817715] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.817737] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.817758] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.817803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-04-26 14:06:19.817905] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.817977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.818005] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.818023] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.818043] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.818070] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:40.153 [2024-04-26 14:06:19.818124] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.818147] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.818205] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.153 [2024-04-26 14:06:19.818242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.153 [2024-04-26 14:06:19.818305] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.153 [2024-04-26 14:06:19.818375] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.153 [2024-04-26 14:06:19.818403] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.153 [2024-04-26 14:06:19.818421] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.153 [2024-04-26 14:06:19.818441] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.153 [2024-04-26 14:06:19.818477] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:40.153 [2024-04-26 14:06:19.818504] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:40.153 [2024-04-26 14:06:19.818582] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:40.153 [2024-04-26 14:06:19.818635] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.818708] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.818732] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.818779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-04-26 14:06:19.818847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.154 [2024-04-26 14:06:19.818965] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.154 [2024-04-26 14:06:19.819009] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.154 [2024-04-26 14:06:19.819029] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.819052] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:23:40.154 [2024-04-26 14:06:19.819089] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:40.154 [2024-04-26 14:06:19.819116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.823184] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.823239] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.823287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-04-26 14:06:19.823330] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-04-26 14:06:19.823350] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.823371] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.154 [2024-04-26 14:06:19.823426] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:40.154 [2024-04-26 14:06:19.823454] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:40.154 [2024-04-26 14:06:19.823482] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:40.154 [2024-04-26 14:06:19.823524] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:40.154 [2024-04-26 14:06:19.823563] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:40.154 [2024-04-26 14:06:19.823593] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.823637] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.823681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.823705] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.823726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.823777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.154 [2024-04-26 14:06:19.823860] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.154 [2024-04-26 14:06:19.823946] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-04-26 14:06:19.823975] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-04-26 14:06:19.824003] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824024] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:23:40.154 [2024-04-26 14:06:19.824061] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824093] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824114] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.824189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.154 [2024-04-26 14:06:19.824223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 
[2024-04-26 14:06:19.824243] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824262] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.824293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.154 [2024-04-26 14:06:19.824320] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824367] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.824399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.154 [2024-04-26 14:06:19.824426] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824464] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.824495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.154 [2024-04-26 14:06:19.824519] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.824566] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.824598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824620] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.824655] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-04-26 14:06:19.824728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:23:40.154 [2024-04-26 14:06:19.824765] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:23:40.154 [2024-04-26 14:06:19.824788] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:23:40.154 [2024-04-26 14:06:19.824811] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.154 [2024-04-26 14:06:19.824832] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.154 [2024-04-26 14:06:19.824867] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-04-26 14:06:19.824894] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-04-26 14:06:19.824913] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.824933] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.154 [2024-04-26 14:06:19.824971] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:40.154 [2024-04-26 14:06:19.825000] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.825046] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.825075] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.825105] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.825135] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.825186] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.825223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:40.154 [2024-04-26 14:06:19.825289] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.154 [2024-04-26 14:06:19.825352] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.154 [2024-04-26 14:06:19.825379] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.154 [2024-04-26 14:06:19.825398] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.825418] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.154 [2024-04-26 14:06:19.825651] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.825732] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:40.154 [2024-04-26 14:06:19.825788] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.154 [2024-04-26 14:06:19.825812] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.154 [2024-04-26 14:06:19.825878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.154 [2024-04-26 14:06:19.825944] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.154 [2024-04-26 14:06:19.826050] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.415 [2024-04-26 14:06:19.826078] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.415 [2024-04-26 14:06:19.826096] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.415 [2024-04-26 14:06:19.826118] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:40.415 [2024-04-26 14:06:19.826142] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:40.415 [2024-04-26 14:06:19.826205] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.415 [2024-04-26 14:06:19.826239] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.415 [2024-04-26 14:06:19.826259] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.415 [2024-04-26 14:06:19.826294] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.415 [2024-04-26 14:06:19.826321] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.415 [2024-04-26 14:06:19.826339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.415 [2024-04-26 14:06:19.826359] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.415 [2024-04-26 14:06:19.826452] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:40.416 [2024-04-26 14:06:19.826519] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.826587] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.826638] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.826661] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.826705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.826781] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.416 [2024-04-26 14:06:19.826885] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.416 [2024-04-26 14:06:19.826913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.416 [2024-04-26 14:06:19.826931] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.826951] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:40.416 [2024-04-26 14:06:19.826984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:40.416 [2024-04-26 14:06:19.827007] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.827037] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.827056] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.827089] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.827116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.827134] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.831314] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831364] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831408] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.831451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.831506] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.416 [2024-04-26 14:06:19.831607] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.416 [2024-04-26 14:06:19.831627] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.416 [2024-04-26 14:06:19.831639] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831652] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:23:40.416 [2024-04-26 14:06:19.831668] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:40.416 [2024-04-26 14:06:19.831694] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831715] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831727] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831769] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.831786] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.831798] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.831811] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.831875] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831899] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831930] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831949] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831973] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.831991] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:40.416 [2024-04-26 14:06:19.832008] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:40.416 [2024-04-26 14:06:19.832025] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:40.416 [2024-04-26 14:06:19.832099] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832115] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.832139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.832183] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832211] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.832233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.416 [2024-04-26 14:06:19.832285] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.416 [2024-04-26 14:06:19.832310] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:40.416 [2024-04-26 14:06:19.832380] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.832404] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.832418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832438] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.832460] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.832477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.832494] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832507] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.832535] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832548] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.832569] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.832607] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:40.416 [2024-04-26 14:06:19.832671] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.832695] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.832707] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832720] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.832747] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.832781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.832825] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:40.416 [2024-04-26 14:06:19.832879] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.832903] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.832915] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832928] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.832955] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.832968] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.832994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.833031] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:40.416 [2024-04-26 14:06:19.833094] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.416 [2024-04-26 14:06:19.833118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.416 [2024-04-26 14:06:19.833130] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.833143] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:40.416 [2024-04-26 14:06:19.833216] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.833232] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.833255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.833278] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.833292] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.833314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.833342] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.833356] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:23:40.416 [2024-04-26 14:06:19.833378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.416 [2024-04-26 14:06:19.833404] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.416 [2024-04-26 14:06:19.833424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:23:40.417 [2024-04-26 14:06:19.833445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff 
cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.417 [2024-04-26 14:06:19.833496] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:23:40.417 [2024-04-26 14:06:19.833515] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:23:40.417 [2024-04-26 14:06:19.833529] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:23:40.417 [2024-04-26 14:06:19.833543] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:23:40.417 [2024-04-26 14:06:19.833688] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.417 [2024-04-26 14:06:19.833709] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.417 [2024-04-26 14:06:19.833722] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.833736] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:23:40.417 [2024-04-26 14:06:19.833753] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:23:40.417 [2024-04-26 14:06:19.833769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.833848] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.833866] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.833891] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.417 [2024-04-26 14:06:19.833907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.417 [2024-04-26 14:06:19.833919] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.833932] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:23:40.417 [2024-04-26 14:06:19.833947] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:23:40.417 [2024-04-26 14:06:19.833961] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.833990] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834003] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834029] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.417 [2024-04-26 14:06:19.834045] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.417 [2024-04-26 14:06:19.834057] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834069] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:23:40.417 [2024-04-26 14:06:19.834083] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:23:40.417 [2024-04-26 14:06:19.834097] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834120] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834133] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834149] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:40.417 [2024-04-26 14:06:19.834186] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:40.417 [2024-04-26 14:06:19.834203] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834216] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:23:40.417 [2024-04-26 14:06:19.834231] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:23:40.417 [2024-04-26 14:06:19.834244] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834263] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834275] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834291] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.417 [2024-04-26 14:06:19.834307] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.417 [2024-04-26 14:06:19.834319] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834332] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:23:40.417 [2024-04-26 14:06:19.834382] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.417 [2024-04-26 14:06:19.834404] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.417 [2024-04-26 14:06:19.834418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834430] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:23:40.417 [2024-04-26 14:06:19.834474] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.417 [2024-04-26 14:06:19.834491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.417 [2024-04-26 14:06:19.834503] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834515] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:23:40.417 [2024-04-26 14:06:19.834543] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.417 [2024-04-26 14:06:19.834572] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.417 [2024-04-26 14:06:19.834589] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.417 [2024-04-26 14:06:19.834602] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:23:40.417 ===================================================== 00:23:40.417 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.417 ===================================================== 00:23:40.417 Controller Capabilities/Features 00:23:40.417 ================================ 00:23:40.417 Vendor ID: 8086 00:23:40.417 Subsystem Vendor ID: 8086 00:23:40.417 Serial Number: SPDK00000000000001 00:23:40.417 Model Number: SPDK bdev Controller 00:23:40.417 Firmware Version: 24.05 00:23:40.417 Recommended Arb Burst: 6 00:23:40.417 IEEE OUI Identifier: e4 d2 5c 00:23:40.417 Multi-path I/O 00:23:40.417 May have multiple 
subsystem ports: Yes 00:23:40.417 May have multiple controllers: Yes 00:23:40.417 Associated with SR-IOV VF: No 00:23:40.417 Max Data Transfer Size: 131072 00:23:40.417 Max Number of Namespaces: 32 00:23:40.417 Max Number of I/O Queues: 127 00:23:40.417 NVMe Specification Version (VS): 1.3 00:23:40.417 NVMe Specification Version (Identify): 1.3 00:23:40.417 Maximum Queue Entries: 128 00:23:40.417 Contiguous Queues Required: Yes 00:23:40.417 Arbitration Mechanisms Supported 00:23:40.417 Weighted Round Robin: Not Supported 00:23:40.417 Vendor Specific: Not Supported 00:23:40.417 Reset Timeout: 15000 ms 00:23:40.417 Doorbell Stride: 4 bytes 00:23:40.417 NVM Subsystem Reset: Not Supported 00:23:40.417 Command Sets Supported 00:23:40.417 NVM Command Set: Supported 00:23:40.417 Boot Partition: Not Supported 00:23:40.417 Memory Page Size Minimum: 4096 bytes 00:23:40.417 Memory Page Size Maximum: 4096 bytes 00:23:40.417 Persistent Memory Region: Not Supported 00:23:40.417 Optional Asynchronous Events Supported 00:23:40.417 Namespace Attribute Notices: Supported 00:23:40.417 Firmware Activation Notices: Not Supported 00:23:40.417 ANA Change Notices: Not Supported 00:23:40.417 PLE Aggregate Log Change Notices: Not Supported 00:23:40.417 LBA Status Info Alert Notices: Not Supported 00:23:40.417 EGE Aggregate Log Change Notices: Not Supported 00:23:40.417 Normal NVM Subsystem Shutdown event: Not Supported 00:23:40.417 Zone Descriptor Change Notices: Not Supported 00:23:40.417 Discovery Log Change Notices: Not Supported 00:23:40.417 Controller Attributes 00:23:40.417 128-bit Host Identifier: Supported 00:23:40.417 Non-Operational Permissive Mode: Not Supported 00:23:40.417 NVM Sets: Not Supported 00:23:40.417 Read Recovery Levels: Not Supported 00:23:40.417 Endurance Groups: Not Supported 00:23:40.417 Predictable Latency Mode: Not Supported 00:23:40.417 Traffic Based Keep ALive: Not Supported 00:23:40.417 Namespace Granularity: Not Supported 00:23:40.417 SQ Associations: Not Supported 00:23:40.417 UUID List: Not Supported 00:23:40.417 Multi-Domain Subsystem: Not Supported 00:23:40.417 Fixed Capacity Management: Not Supported 00:23:40.417 Variable Capacity Management: Not Supported 00:23:40.417 Delete Endurance Group: Not Supported 00:23:40.417 Delete NVM Set: Not Supported 00:23:40.417 Extended LBA Formats Supported: Not Supported 00:23:40.417 Flexible Data Placement Supported: Not Supported 00:23:40.417 00:23:40.417 Controller Memory Buffer Support 00:23:40.417 ================================ 00:23:40.417 Supported: No 00:23:40.417 00:23:40.417 Persistent Memory Region Support 00:23:40.417 ================================ 00:23:40.417 Supported: No 00:23:40.417 00:23:40.417 Admin Command Set Attributes 00:23:40.417 ============================ 00:23:40.417 Security Send/Receive: Not Supported 00:23:40.417 Format NVM: Not Supported 00:23:40.417 Firmware Activate/Download: Not Supported 00:23:40.417 Namespace Management: Not Supported 00:23:40.417 Device Self-Test: Not Supported 00:23:40.417 Directives: Not Supported 00:23:40.417 NVMe-MI: Not Supported 00:23:40.417 Virtualization Management: Not Supported 00:23:40.417 Doorbell Buffer Config: Not Supported 00:23:40.417 Get LBA Status Capability: Not Supported 00:23:40.417 Command & Feature Lockdown Capability: Not Supported 00:23:40.417 Abort Command Limit: 4 00:23:40.417 Async Event Request Limit: 4 00:23:40.417 Number of Firmware Slots: N/A 00:23:40.417 Firmware Slot 1 Read-Only: N/A 00:23:40.417 Firmware Activation Without Reset: N/A 00:23:40.417 
Multiple Update Detection Support: N/A 00:23:40.417 Firmware Update Granularity: No Information Provided 00:23:40.417 Per-Namespace SMART Log: No 00:23:40.417 Asymmetric Namespace Access Log Page: Not Supported 00:23:40.418 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:40.418 Command Effects Log Page: Supported 00:23:40.418 Get Log Page Extended Data: Supported 00:23:40.418 Telemetry Log Pages: Not Supported 00:23:40.418 Persistent Event Log Pages: Not Supported 00:23:40.418 Supported Log Pages Log Page: May Support 00:23:40.418 Commands Supported & Effects Log Page: Not Supported 00:23:40.418 Feature Identifiers & Effects Log Page:May Support 00:23:40.418 NVMe-MI Commands & Effects Log Page: May Support 00:23:40.418 Data Area 4 for Telemetry Log: Not Supported 00:23:40.418 Error Log Page Entries Supported: 128 00:23:40.418 Keep Alive: Supported 00:23:40.418 Keep Alive Granularity: 10000 ms 00:23:40.418 00:23:40.418 NVM Command Set Attributes 00:23:40.418 ========================== 00:23:40.418 Submission Queue Entry Size 00:23:40.418 Max: 64 00:23:40.418 Min: 64 00:23:40.418 Completion Queue Entry Size 00:23:40.418 Max: 16 00:23:40.418 Min: 16 00:23:40.418 Number of Namespaces: 32 00:23:40.418 Compare Command: Supported 00:23:40.418 Write Uncorrectable Command: Not Supported 00:23:40.418 Dataset Management Command: Supported 00:23:40.418 Write Zeroes Command: Supported 00:23:40.418 Set Features Save Field: Not Supported 00:23:40.418 Reservations: Supported 00:23:40.418 Timestamp: Not Supported 00:23:40.418 Copy: Supported 00:23:40.418 Volatile Write Cache: Present 00:23:40.418 Atomic Write Unit (Normal): 1 00:23:40.418 Atomic Write Unit (PFail): 1 00:23:40.418 Atomic Compare & Write Unit: 1 00:23:40.418 Fused Compare & Write: Supported 00:23:40.418 Scatter-Gather List 00:23:40.418 SGL Command Set: Supported 00:23:40.418 SGL Keyed: Supported 00:23:40.418 SGL Bit Bucket Descriptor: Not Supported 00:23:40.418 SGL Metadata Pointer: Not Supported 00:23:40.418 Oversized SGL: Not Supported 00:23:40.418 SGL Metadata Address: Not Supported 00:23:40.418 SGL Offset: Supported 00:23:40.418 Transport SGL Data Block: Not Supported 00:23:40.418 Replay Protected Memory Block: Not Supported 00:23:40.418 00:23:40.418 Firmware Slot Information 00:23:40.418 ========================= 00:23:40.418 Active slot: 1 00:23:40.418 Slot 1 Firmware Revision: 24.05 00:23:40.418 00:23:40.418 00:23:40.418 Commands Supported and Effects 00:23:40.418 ============================== 00:23:40.418 Admin Commands 00:23:40.418 -------------- 00:23:40.418 Get Log Page (02h): Supported 00:23:40.418 Identify (06h): Supported 00:23:40.418 Abort (08h): Supported 00:23:40.418 Set Features (09h): Supported 00:23:40.418 Get Features (0Ah): Supported 00:23:40.418 Asynchronous Event Request (0Ch): Supported 00:23:40.418 Keep Alive (18h): Supported 00:23:40.418 I/O Commands 00:23:40.418 ------------ 00:23:40.418 Flush (00h): Supported LBA-Change 00:23:40.418 Write (01h): Supported LBA-Change 00:23:40.418 Read (02h): Supported 00:23:40.418 Compare (05h): Supported 00:23:40.418 Write Zeroes (08h): Supported LBA-Change 00:23:40.418 Dataset Management (09h): Supported LBA-Change 00:23:40.418 Copy (19h): Supported LBA-Change 00:23:40.418 Unknown (79h): Supported LBA-Change 00:23:40.418 Unknown (7Ah): Supported 00:23:40.418 00:23:40.418 Error Log 00:23:40.418 ========= 00:23:40.418 00:23:40.418 Arbitration 00:23:40.418 =========== 00:23:40.418 Arbitration Burst: 1 00:23:40.418 00:23:40.418 Power Management 00:23:40.418 ================ 
00:23:40.418 Number of Power States: 1 00:23:40.418 Current Power State: Power State #0 00:23:40.418 Power State #0: 00:23:40.418 Max Power: 0.00 W 00:23:40.418 Non-Operational State: Operational 00:23:40.418 Entry Latency: Not Reported 00:23:40.418 Exit Latency: Not Reported 00:23:40.418 Relative Read Throughput: 0 00:23:40.418 Relative Read Latency: 0 00:23:40.418 Relative Write Throughput: 0 00:23:40.418 Relative Write Latency: 0 00:23:40.418 Idle Power: Not Reported 00:23:40.418 Active Power: Not Reported 00:23:40.418 Non-Operational Permissive Mode: Not Supported 00:23:40.418 00:23:40.418 Health Information 00:23:40.418 ================== 00:23:40.418 Critical Warnings: 00:23:40.418 Available Spare Space: OK 00:23:40.418 Temperature: OK 00:23:40.418 Device Reliability: OK 00:23:40.418 Read Only: No 00:23:40.418 Volatile Memory Backup: OK 00:23:40.418 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:40.418 Temperature Threshold: [2024-04-26 14:06:19.834925] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.834943] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:23:40.418 [2024-04-26 14:06:19.834970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.418 [2024-04-26 14:06:19.835026] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:23:40.418 [2024-04-26 14:06:19.835113] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.418 [2024-04-26 14:06:19.835132] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.418 [2024-04-26 14:06:19.839170] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.839213] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:23:40.418 [2024-04-26 14:06:19.839380] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:40.418 [2024-04-26 14:06:19.839426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.418 [2024-04-26 14:06:19.839474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.418 [2024-04-26 14:06:19.839495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.418 [2024-04-26 14:06:19.839514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.418 [2024-04-26 14:06:19.839540] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.839555] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.839569] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.418 [2024-04-26 14:06:19.839595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.418 [2024-04-26 14:06:19.839653] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.418 [2024-04-26 14:06:19.839738] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.418 [2024-04-26 14:06:19.839759] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.418 [2024-04-26 14:06:19.839772] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.839786] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.418 [2024-04-26 14:06:19.839816] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.839831] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.839844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.418 [2024-04-26 14:06:19.839868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.418 [2024-04-26 14:06:19.839916] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.418 [2024-04-26 14:06:19.840006] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.418 [2024-04-26 14:06:19.840024] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.418 [2024-04-26 14:06:19.840036] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.840049] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.418 [2024-04-26 14:06:19.840066] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:40.418 [2024-04-26 14:06:19.840083] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:40.418 [2024-04-26 14:06:19.840118] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.840142] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.840182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.418 [2024-04-26 14:06:19.840206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.418 [2024-04-26 14:06:19.840248] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.418 [2024-04-26 14:06:19.840324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.418 [2024-04-26 14:06:19.840344] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.418 [2024-04-26 14:06:19.840356] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.840369] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.418 [2024-04-26 14:06:19.840399] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.840418] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.418 [2024-04-26 14:06:19.840431] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.418 [2024-04-26 14:06:19.840452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:40.418 [2024-04-26 14:06:19.840490] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.418 [2024-04-26 14:06:19.840550] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.418 [2024-04-26 14:06:19.840568] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.418 [2024-04-26 14:06:19.840579] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.840592] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.840619] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.840639] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.840652] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.840673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.840716] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.840776] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.840793] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.840805] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.840817] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.840845] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.840864] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.840876] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.840902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.840939] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.840995] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.841013] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.841025] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841037] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.841064] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841083] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841096] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.841117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.841172] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, 
qid 0 00:23:40.419 [2024-04-26 14:06:19.841226] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.841243] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.841256] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841268] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.841296] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841316] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841329] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.841350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.841388] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.841440] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.841459] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.841470] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841482] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.841509] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841529] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841541] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.841562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.841598] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.841676] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.841688] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.841696] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841705] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.841723] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841741] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841749] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.841763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.841789] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.841858] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 
14:06:19.841871] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.841879] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841888] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.841910] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841919] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.841928] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.841942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.841967] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.842026] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.842038] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.842051] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842059] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.842078] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842095] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.842108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.842133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.842200] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.842217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.842226] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842235] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.842259] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842268] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842276] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.842290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.842318] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.842387] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.842400] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.842408] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842416] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.842435] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842444] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842451] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.842476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.842503] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.842559] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.842583] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.842591] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842599] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.842618] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842627] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842635] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.842650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.842678] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.842741] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.419 [2024-04-26 14:06:19.842753] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.419 [2024-04-26 14:06:19.842761] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842769] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.419 [2024-04-26 14:06:19.842787] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842796] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.419 [2024-04-26 14:06:19.842804] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.419 [2024-04-26 14:06:19.842818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.419 [2024-04-26 14:06:19.842844] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.419 [2024-04-26 14:06:19.842897] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.420 [2024-04-26 14:06:19.842913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.420 [2024-04-26 14:06:19.842922] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.842930] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.420 [2024-04-26 14:06:19.842948] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.842957] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.842965] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.420 [2024-04-26 14:06:19.842988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.420 [2024-04-26 14:06:19.843014] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.420 [2024-04-26 14:06:19.843070] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.420 [2024-04-26 14:06:19.843082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.420 [2024-04-26 14:06:19.843090] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.843098] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.420 [2024-04-26 14:06:19.843116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.843125] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.843138] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:23:40.420 [2024-04-26 14:06:19.847174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:40.420 [2024-04-26 14:06:19.847239] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:23:40.420 [2024-04-26 14:06:19.847320] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:40.420 [2024-04-26 14:06:19.847335] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:40.420 [2024-04-26 14:06:19.847344] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:40.420 [2024-04-26 14:06:19.847353] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:23:40.420 [2024-04-26 14:06:19.847377] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:40.420 0 Kelvin (-273 Celsius) 00:23:40.420 Available Spare: 0% 00:23:40.420 Available Spare Threshold: 0% 00:23:40.420 Life Percentage Used: 0% 00:23:40.420 Data Units Read: 0 00:23:40.420 Data Units Written: 0 00:23:40.420 Host Read Commands: 0 00:23:40.420 Host Write Commands: 0 00:23:40.420 Controller Busy Time: 0 minutes 00:23:40.420 Power Cycles: 0 00:23:40.420 Power On Hours: 0 hours 00:23:40.420 Unsafe Shutdowns: 0 00:23:40.420 Unrecoverable Media Errors: 0 00:23:40.420 Lifetime Error Log Entries: 0 00:23:40.420 Warning Temperature Time: 0 minutes 00:23:40.420 Critical Temperature Time: 0 minutes 00:23:40.420 00:23:40.420 Number of Queues 00:23:40.420 ================ 00:23:40.420 Number of I/O Submission Queues: 127 00:23:40.420 Number of I/O Completion Queues: 127 00:23:40.420 00:23:40.420 Active Namespaces 00:23:40.420 ================= 00:23:40.420 Namespace ID:1 00:23:40.420 Error Recovery Timeout: Unlimited 00:23:40.420 Command Set Identifier: NVM (00h) 00:23:40.420 
Deallocate: Supported 00:23:40.420 Deallocated/Unwritten Error: Not Supported 00:23:40.420 Deallocated Read Value: Unknown 00:23:40.420 Deallocate in Write Zeroes: Not Supported 00:23:40.420 Deallocated Guard Field: 0xFFFF 00:23:40.420 Flush: Supported 00:23:40.420 Reservation: Supported 00:23:40.420 Namespace Sharing Capabilities: Multiple Controllers 00:23:40.420 Size (in LBAs): 131072 (0GiB) 00:23:40.420 Capacity (in LBAs): 131072 (0GiB) 00:23:40.420 Utilization (in LBAs): 131072 (0GiB) 00:23:40.420 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:40.420 EUI64: ABCDEF0123456789 00:23:40.420 UUID: f3ecdb21-14fa-42ae-8871-713ea4001e3b 00:23:40.420 Thin Provisioning: Not Supported 00:23:40.420 Per-NS Atomic Units: Yes 00:23:40.420 Atomic Boundary Size (Normal): 0 00:23:40.420 Atomic Boundary Size (PFail): 0 00:23:40.420 Atomic Boundary Offset: 0 00:23:40.420 Maximum Single Source Range Length: 65535 00:23:40.420 Maximum Copy Length: 65535 00:23:40.420 Maximum Source Range Count: 1 00:23:40.420 NGUID/EUI64 Never Reused: No 00:23:40.420 Namespace Write Protected: No 00:23:40.420 Number of LBA Formats: 1 00:23:40.420 Current LBA Format: LBA Format #00 00:23:40.420 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:40.420 00:23:40.420 14:06:19 -- host/identify.sh@51 -- # sync 00:23:41.358 14:06:20 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.358 14:06:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:41.359 14:06:20 -- common/autotest_common.sh@10 -- # set +x 00:23:41.359 14:06:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:41.359 14:06:20 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:41.359 14:06:20 -- host/identify.sh@56 -- # nvmftestfini 00:23:41.359 14:06:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:41.359 14:06:20 -- nvmf/common.sh@117 -- # sync 00:23:41.359 14:06:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.359 14:06:20 -- nvmf/common.sh@120 -- # set +e 00:23:41.359 14:06:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.359 14:06:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.359 rmmod nvme_tcp 00:23:41.359 rmmod nvme_fabrics 00:23:41.359 rmmod nvme_keyring 00:23:41.359 14:06:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.359 14:06:20 -- nvmf/common.sh@124 -- # set -e 00:23:41.359 14:06:20 -- nvmf/common.sh@125 -- # return 0 00:23:41.359 14:06:20 -- nvmf/common.sh@478 -- # '[' -n 82518 ']' 00:23:41.359 14:06:20 -- nvmf/common.sh@479 -- # killprocess 82518 00:23:41.359 14:06:20 -- common/autotest_common.sh@936 -- # '[' -z 82518 ']' 00:23:41.359 14:06:20 -- common/autotest_common.sh@940 -- # kill -0 82518 00:23:41.359 14:06:20 -- common/autotest_common.sh@941 -- # uname 00:23:41.359 14:06:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:41.359 14:06:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82518 00:23:41.359 killing process with pid 82518 00:23:41.359 14:06:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:41.359 14:06:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:41.359 14:06:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82518' 00:23:41.359 14:06:20 -- common/autotest_common.sh@955 -- # kill 82518 00:23:41.359 [2024-04-26 14:06:20.820060] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:23:41.359 
14:06:20 -- common/autotest_common.sh@960 -- # wait 82518 00:23:43.263 14:06:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:43.263 14:06:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:43.263 14:06:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:43.263 14:06:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:43.263 14:06:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:43.263 14:06:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.263 14:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.263 14:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.263 14:06:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:43.263 00:23:43.263 real 0m5.166s 00:23:43.263 user 0m14.595s 00:23:43.263 sys 0m1.079s 00:23:43.263 ************************************ 00:23:43.263 END TEST nvmf_identify 00:23:43.263 ************************************ 00:23:43.263 14:06:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:43.264 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:23:43.264 14:06:22 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:43.264 14:06:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:43.264 14:06:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:43.264 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:23:43.264 ************************************ 00:23:43.264 START TEST nvmf_perf 00:23:43.264 ************************************ 00:23:43.264 14:06:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:43.264 * Looking for test storage... 
00:23:43.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:43.264 14:06:22 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:43.264 14:06:22 -- nvmf/common.sh@7 -- # uname -s 00:23:43.264 14:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.264 14:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.264 14:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.264 14:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.264 14:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.264 14:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.264 14:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.264 14:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.264 14:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.264 14:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.264 14:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:43.264 14:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:43.264 14:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.264 14:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.264 14:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:43.264 14:06:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.264 14:06:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:43.264 14:06:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.264 14:06:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.264 14:06:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.264 14:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.264 14:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.264 14:06:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.264 14:06:22 -- paths/export.sh@5 -- # export PATH 00:23:43.264 14:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.264 14:06:22 -- nvmf/common.sh@47 -- # : 0 00:23:43.264 14:06:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.264 14:06:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.264 14:06:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.264 14:06:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.264 14:06:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.264 14:06:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.264 14:06:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.264 14:06:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.264 14:06:22 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:43.264 14:06:22 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:43.264 14:06:22 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:43.264 14:06:22 -- host/perf.sh@17 -- # nvmftestinit 00:23:43.264 14:06:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:43.264 14:06:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.264 14:06:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:43.264 14:06:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:43.264 14:06:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:43.264 14:06:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.264 14:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.264 14:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.264 14:06:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:43.264 14:06:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:43.264 14:06:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:43.264 14:06:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:43.264 14:06:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:43.264 14:06:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:43.264 14:06:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.264 14:06:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.264 14:06:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:43.264 14:06:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:43.264 14:06:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:43.264 14:06:22 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:43.264 14:06:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:43.264 14:06:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.264 14:06:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:43.264 14:06:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:43.264 14:06:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:43.264 14:06:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:43.264 14:06:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:43.264 14:06:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:43.264 Cannot find device "nvmf_tgt_br" 00:23:43.264 14:06:22 -- nvmf/common.sh@155 -- # true 00:23:43.264 14:06:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:43.523 Cannot find device "nvmf_tgt_br2" 00:23:43.523 14:06:22 -- nvmf/common.sh@156 -- # true 00:23:43.523 14:06:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:43.523 14:06:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:43.523 Cannot find device "nvmf_tgt_br" 00:23:43.523 14:06:22 -- nvmf/common.sh@158 -- # true 00:23:43.523 14:06:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:43.523 Cannot find device "nvmf_tgt_br2" 00:23:43.523 14:06:22 -- nvmf/common.sh@159 -- # true 00:23:43.523 14:06:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:43.523 14:06:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:43.523 14:06:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:43.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.523 14:06:23 -- nvmf/common.sh@162 -- # true 00:23:43.523 14:06:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:43.523 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:43.523 14:06:23 -- nvmf/common.sh@163 -- # true 00:23:43.523 14:06:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:43.523 14:06:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:43.523 14:06:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:43.523 14:06:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:43.523 14:06:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:43.523 14:06:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:43.523 14:06:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:43.523 14:06:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:43.523 14:06:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:43.523 14:06:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:43.523 14:06:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:43.523 14:06:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:43.523 14:06:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:43.523 14:06:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:43.523 14:06:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:23:43.523 14:06:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:43.523 14:06:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:43.782 14:06:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:43.782 14:06:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:43.782 14:06:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:43.782 14:06:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:43.782 14:06:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:43.782 14:06:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:43.782 14:06:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:43.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:23:43.782 00:23:43.782 --- 10.0.0.2 ping statistics --- 00:23:43.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.782 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:23:43.782 14:06:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:43.782 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:43.782 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:23:43.782 00:23:43.782 --- 10.0.0.3 ping statistics --- 00:23:43.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.782 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:43.782 14:06:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:43.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:23:43.782 00:23:43.782 --- 10.0.0.1 ping statistics --- 00:23:43.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.782 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:23:43.782 14:06:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.782 14:06:23 -- nvmf/common.sh@422 -- # return 0 00:23:43.782 14:06:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:43.782 14:06:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.782 14:06:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:43.782 14:06:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:43.782 14:06:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.782 14:06:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:43.782 14:06:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:43.782 14:06:23 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:43.782 14:06:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:43.782 14:06:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:43.782 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:23:43.782 14:06:23 -- nvmf/common.sh@470 -- # nvmfpid=82779 00:23:43.782 14:06:23 -- nvmf/common.sh@471 -- # waitforlisten 82779 00:23:43.782 14:06:23 -- common/autotest_common.sh@817 -- # '[' -z 82779 ']' 00:23:43.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.782 14:06:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.782 14:06:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:43.782 14:06:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
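For reference, the nvmf_veth_init bring-up traced above reduces to the short stand-alone sketch below: one veth leg (nvmf_init_if, 10.0.0.1/24) stays in the root namespace as the initiator side, two legs (nvmf_tgt_if at 10.0.0.2/24 and nvmf_tgt_if2 at 10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the host-side peer ends are joined through the nvmf_br bridge so the ping checks above succeed. This is a condensed recap of the commands already traced, not an additional step in the run; it assumes root privileges and iproute2/iptables on a Linux host.

  # target-side network namespace and the three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target-facing legs into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: initiator 10.0.0.1, target listeners 10.0.0.2 / 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring every leg up, including loopback inside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (port 4420) in and bridge-local forwarding
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check, matching the ping output above
  ping -c 1 10.0.0.2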
00:23:43.782 14:06:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:43.782 14:06:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:43.782 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:23:43.782 [2024-04-26 14:06:23.432011] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:23:43.782 [2024-04-26 14:06:23.432133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.040 [2024-04-26 14:06:23.598115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.297 [2024-04-26 14:06:23.894575] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.297 [2024-04-26 14:06:23.894649] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.297 [2024-04-26 14:06:23.894668] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.297 [2024-04-26 14:06:23.894680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.297 [2024-04-26 14:06:23.894695] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.297 [2024-04-26 14:06:23.894981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.297 [2024-04-26 14:06:23.895196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.297 [2024-04-26 14:06:23.895931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.297 [2024-04-26 14:06:23.895964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.865 14:06:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:44.865 14:06:24 -- common/autotest_common.sh@850 -- # return 0 00:23:44.865 14:06:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:44.865 14:06:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:44.865 14:06:24 -- common/autotest_common.sh@10 -- # set +x 00:23:44.865 14:06:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.865 14:06:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:44.865 14:06:24 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:23:45.432 14:06:24 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:23:45.432 14:06:24 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:45.432 14:06:25 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:23:45.432 14:06:25 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:46.000 14:06:25 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:46.000 14:06:25 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:23:46.000 14:06:25 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:46.000 14:06:25 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:46.000 14:06:25 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:46.000 [2024-04-26 14:06:25.545708] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.000 14:06:25 -- host/perf.sh@44 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.258 14:06:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:46.258 14:06:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:46.517 14:06:25 -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:46.517 14:06:25 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:46.517 14:06:26 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.776 [2024-04-26 14:06:26.311915] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.776 14:06:26 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:47.047 14:06:26 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:47.047 14:06:26 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:47.047 14:06:26 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:47.047 14:06:26 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:23:48.426 Initializing NVMe Controllers 00:23:48.426 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:23:48.426 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:23:48.426 Initialization complete. Launching workers. 00:23:48.426 ======================================================== 00:23:48.426 Latency(us) 00:23:48.426 Device Information : IOPS MiB/s Average min max 00:23:48.426 PCIE (0000:00:10.0) NSID 1 from core 0: 16489.21 64.41 1941.20 419.17 8353.41 00:23:48.426 ======================================================== 00:23:48.426 Total : 16489.21 64.41 1941.20 419.17 8353.41 00:23:48.426 00:23:48.426 14:06:27 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:49.869 Initializing NVMe Controllers 00:23:49.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.869 Initialization complete. Launching workers. 
00:23:49.869 ======================================================== 00:23:49.869 Latency(us) 00:23:49.869 Device Information : IOPS MiB/s Average min max 00:23:49.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3413.96 13.34 292.71 109.33 4305.30 00:23:49.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8174.91 7928.55 12045.35 00:23:49.869 ======================================================== 00:23:49.869 Total : 3536.95 13.82 566.81 109.33 12045.35 00:23:49.869 00:23:49.869 14:06:29 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:51.247 Initializing NVMe Controllers 00:23:51.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:51.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:51.247 Initialization complete. Launching workers. 00:23:51.247 ======================================================== 00:23:51.247 Latency(us) 00:23:51.247 Device Information : IOPS MiB/s Average min max 00:23:51.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8807.99 34.41 3649.12 638.85 8993.22 00:23:51.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2600.00 10.16 12380.22 6065.84 24024.13 00:23:51.247 ======================================================== 00:23:51.247 Total : 11407.99 44.56 5639.03 638.85 24024.13 00:23:51.247 00:23:51.247 14:06:30 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:23:51.247 14:06:30 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:54.528 Initializing NVMe Controllers 00:23:54.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.528 Controller IO queue size 128, less than required. 00:23:54.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.528 Controller IO queue size 128, less than required. 00:23:54.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:54.528 Initialization complete. Launching workers. 
00:23:54.528 ======================================================== 00:23:54.528 Latency(us) 00:23:54.528 Device Information : IOPS MiB/s Average min max 00:23:54.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1524.37 381.09 87295.61 48476.36 303198.37 00:23:54.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 524.77 131.19 257634.18 129014.79 512356.55 00:23:54.528 ======================================================== 00:23:54.528 Total : 2049.14 512.28 130917.89 48476.36 512356.55 00:23:54.528 00:23:54.528 14:06:33 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:54.528 No valid NVMe controllers or AIO or URING devices found 00:23:54.528 Initializing NVMe Controllers 00:23:54.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.528 Controller IO queue size 128, less than required. 00:23:54.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.529 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:54.529 Controller IO queue size 128, less than required. 00:23:54.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.529 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:23:54.529 WARNING: Some requested NVMe devices were skipped 00:23:54.529 14:06:33 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:57.820 Initializing NVMe Controllers 00:23:57.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.820 Controller IO queue size 128, less than required. 00:23:57.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:57.820 Controller IO queue size 128, less than required. 00:23:57.820 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:57.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:57.820 Initialization complete. Launching workers. 
00:23:57.820 00:23:57.820 ==================== 00:23:57.820 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:57.820 TCP transport: 00:23:57.820 polls: 5548 00:23:57.820 idle_polls: 2711 00:23:57.820 sock_completions: 2837 00:23:57.820 nvme_completions: 5721 00:23:57.820 submitted_requests: 8634 00:23:57.820 queued_requests: 1 00:23:57.820 00:23:57.820 ==================== 00:23:57.820 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:57.820 TCP transport: 00:23:57.820 polls: 6492 00:23:57.820 idle_polls: 3699 00:23:57.820 sock_completions: 2793 00:23:57.820 nvme_completions: 5899 00:23:57.820 submitted_requests: 8950 00:23:57.820 queued_requests: 1 00:23:57.820 ======================================================== 00:23:57.820 Latency(us) 00:23:57.820 Device Information : IOPS MiB/s Average min max 00:23:57.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1427.67 356.92 93777.64 56602.61 324753.10 00:23:57.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1472.10 368.02 88323.55 53011.95 441350.34 00:23:57.820 ======================================================== 00:23:57.820 Total : 2899.76 724.94 91008.82 53011.95 441350.34 00:23:57.820 00:23:57.820 14:06:36 -- host/perf.sh@66 -- # sync 00:23:57.820 14:06:36 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:57.820 14:06:37 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:57.820 14:06:37 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:57.820 14:06:37 -- host/perf.sh@114 -- # nvmftestfini 00:23:57.820 14:06:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:57.820 14:06:37 -- nvmf/common.sh@117 -- # sync 00:23:57.820 14:06:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.820 14:06:37 -- nvmf/common.sh@120 -- # set +e 00:23:57.820 14:06:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.820 14:06:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.820 rmmod nvme_tcp 00:23:57.820 rmmod nvme_fabrics 00:23:57.820 rmmod nvme_keyring 00:23:57.820 14:06:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.820 14:06:37 -- nvmf/common.sh@124 -- # set -e 00:23:57.820 14:06:37 -- nvmf/common.sh@125 -- # return 0 00:23:57.820 14:06:37 -- nvmf/common.sh@478 -- # '[' -n 82779 ']' 00:23:57.820 14:06:37 -- nvmf/common.sh@479 -- # killprocess 82779 00:23:57.820 14:06:37 -- common/autotest_common.sh@936 -- # '[' -z 82779 ']' 00:23:57.820 14:06:37 -- common/autotest_common.sh@940 -- # kill -0 82779 00:23:57.820 14:06:37 -- common/autotest_common.sh@941 -- # uname 00:23:57.820 14:06:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:57.820 14:06:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82779 00:23:57.820 killing process with pid 82779 00:23:57.820 14:06:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:57.820 14:06:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:57.820 14:06:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82779' 00:23:57.820 14:06:37 -- common/autotest_common.sh@955 -- # kill 82779 00:23:57.820 14:06:37 -- common/autotest_common.sh@960 -- # wait 82779 00:23:59.723 14:06:38 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:59.723 14:06:38 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:59.723 14:06:38 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:59.723 14:06:38 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.723 14:06:38 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.723 14:06:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.723 14:06:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.723 14:06:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.723 14:06:38 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:59.723 ************************************ 00:23:59.723 END TEST nvmf_perf 00:23:59.723 ************************************ 00:23:59.723 00:23:59.723 real 0m16.289s 00:23:59.723 user 0m57.097s 00:23:59.723 sys 0m4.371s 00:23:59.723 14:06:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:59.723 14:06:38 -- common/autotest_common.sh@10 -- # set +x 00:23:59.723 14:06:39 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:59.723 14:06:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:59.724 14:06:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.724 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:23:59.724 ************************************ 00:23:59.724 START TEST nvmf_fio_host 00:23:59.724 ************************************ 00:23:59.724 14:06:39 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:59.724 * Looking for test storage... 00:23:59.724 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:59.724 14:06:39 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.724 14:06:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.724 14:06:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.724 14:06:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.724 14:06:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- paths/export.sh@5 -- # export PATH 00:23:59.724 14:06:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:59.724 14:06:39 -- nvmf/common.sh@7 -- # uname -s 00:23:59.724 14:06:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.724 14:06:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.724 14:06:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.724 14:06:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.724 14:06:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.724 14:06:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.724 14:06:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.724 14:06:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.724 14:06:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.724 14:06:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.724 14:06:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:59.724 14:06:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:23:59.724 14:06:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.724 14:06:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.724 14:06:39 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:59.724 14:06:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.724 14:06:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:59.724 14:06:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.724 14:06:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.724 14:06:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.724 14:06:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- paths/export.sh@5 -- # export PATH 00:23:59.724 14:06:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.724 14:06:39 -- nvmf/common.sh@47 -- # : 0 00:23:59.724 14:06:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.724 14:06:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.724 14:06:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.724 14:06:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.724 14:06:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.724 14:06:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.724 14:06:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.724 14:06:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.724 14:06:39 -- host/fio.sh@12 -- # nvmftestinit 00:23:59.724 14:06:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:59.724 14:06:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.724 14:06:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:59.724 14:06:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:59.724 14:06:39 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:23:59.724 14:06:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.724 14:06:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.724 14:06:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.724 14:06:39 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:59.724 14:06:39 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:59.724 14:06:39 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:59.724 14:06:39 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:59.724 14:06:39 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:59.724 14:06:39 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:59.724 14:06:39 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.724 14:06:39 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.724 14:06:39 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:59.724 14:06:39 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:59.724 14:06:39 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:59.724 14:06:39 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:59.724 14:06:39 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:59.724 14:06:39 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.724 14:06:39 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:59.724 14:06:39 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:59.724 14:06:39 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:59.724 14:06:39 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:59.724 14:06:39 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:59.724 14:06:39 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:59.724 Cannot find device "nvmf_tgt_br" 00:23:59.724 14:06:39 -- nvmf/common.sh@155 -- # true 00:23:59.724 14:06:39 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:59.724 Cannot find device "nvmf_tgt_br2" 00:23:59.724 14:06:39 -- nvmf/common.sh@156 -- # true 00:23:59.724 14:06:39 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:59.724 14:06:39 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:59.724 Cannot find device "nvmf_tgt_br" 00:23:59.724 14:06:39 -- nvmf/common.sh@158 -- # true 00:23:59.724 14:06:39 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:59.982 Cannot find device "nvmf_tgt_br2" 00:23:59.982 14:06:39 -- nvmf/common.sh@159 -- # true 00:23:59.982 14:06:39 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:59.982 14:06:39 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:59.982 14:06:39 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:59.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.982 14:06:39 -- nvmf/common.sh@162 -- # true 00:23:59.982 14:06:39 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:59.982 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:59.982 14:06:39 -- nvmf/common.sh@163 -- # true 00:23:59.982 14:06:39 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:59.982 14:06:39 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:59.982 14:06:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:23:59.983 14:06:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:59.983 14:06:39 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:59.983 14:06:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:59.983 14:06:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:59.983 14:06:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:59.983 14:06:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:59.983 14:06:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:59.983 14:06:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:59.983 14:06:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:59.983 14:06:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:59.983 14:06:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:59.983 14:06:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:59.983 14:06:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:59.983 14:06:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:59.983 14:06:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:59.983 14:06:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.241 14:06:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.241 14:06:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.241 14:06:39 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.241 14:06:39 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:00.241 14:06:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:00.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:24:00.241 00:24:00.241 --- 10.0.0.2 ping statistics --- 00:24:00.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.241 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:24:00.241 14:06:39 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:00.241 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:00.241 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:24:00.241 00:24:00.241 --- 10.0.0.3 ping statistics --- 00:24:00.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.241 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:00.241 14:06:39 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:00.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:00.241 00:24:00.241 --- 10.0.0.1 ping statistics --- 00:24:00.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.241 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:00.241 14:06:39 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.241 14:06:39 -- nvmf/common.sh@422 -- # return 0 00:24:00.241 14:06:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:00.241 14:06:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.241 14:06:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:00.241 14:06:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:00.241 14:06:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.241 14:06:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:00.241 14:06:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:00.241 14:06:39 -- host/fio.sh@14 -- # [[ y != y ]] 00:24:00.241 14:06:39 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:00.241 14:06:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:00.241 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:24:00.241 14:06:39 -- host/fio.sh@22 -- # nvmfpid=83288 00:24:00.241 14:06:39 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.241 14:06:39 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.241 14:06:39 -- host/fio.sh@26 -- # waitforlisten 83288 00:24:00.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.241 14:06:39 -- common/autotest_common.sh@817 -- # '[' -z 83288 ']' 00:24:00.241 14:06:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.241 14:06:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:00.241 14:06:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.241 14:06:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:00.241 14:06:39 -- common/autotest_common.sh@10 -- # set +x 00:24:00.241 [2024-04-26 14:06:39.851875] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:00.241 [2024-04-26 14:06:39.851996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.499 [2024-04-26 14:06:40.026034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.756 [2024-04-26 14:06:40.266872] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.756 [2024-04-26 14:06:40.266925] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.756 [2024-04-26 14:06:40.266941] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.756 [2024-04-26 14:06:40.266952] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.756 [2024-04-26 14:06:40.266964] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
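The nvmf_veth_init trace above builds a small virtual topology: the target runs inside the nvmf_tgt_ns_spdk network namespace on 10.0.0.2/10.0.0.3, the initiator stays in the root namespace on 10.0.0.1, and the host-side veth legs are joined by the nvmf_br bridge. A condensed sketch of the ip commands shown in the log:

# namespace for the SPDK target; the initiator side stays in the root namespace
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one leg for the initiator, two legs for the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listen addresses
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring the links up and bridge the host-side legs together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic to the initiator-side port and verify reachability
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1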
00:24:00.756 [2024-04-26 14:06:40.267125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.756 [2024-04-26 14:06:40.267424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.756 [2024-04-26 14:06:40.267752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.756 [2024-04-26 14:06:40.267789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.014 14:06:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:01.014 14:06:40 -- common/autotest_common.sh@850 -- # return 0 00:24:01.014 14:06:40 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.014 14:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.014 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 [2024-04-26 14:06:40.695588] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.272 14:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.272 14:06:40 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:01.272 14:06:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:01.272 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 14:06:40 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:01.272 14:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.272 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 Malloc1 00:24:01.272 14:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.272 14:06:40 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.272 14:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.272 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 14:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.272 14:06:40 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.272 14:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.272 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 14:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.272 14:06:40 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.272 14:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.272 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 [2024-04-26 14:06:40.892738] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.272 14:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.272 14:06:40 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:01.272 14:06:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.272 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:01.272 14:06:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.272 14:06:40 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:24:01.272 14:06:40 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:01.272 14:06:40 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:24:01.272 14:06:40 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:01.272 14:06:40 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.272 14:06:40 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:01.272 14:06:40 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:01.272 14:06:40 -- common/autotest_common.sh@1327 -- # shift 00:24:01.272 14:06:40 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:01.272 14:06:40 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.272 14:06:40 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:01.272 14:06:40 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:01.272 14:06:40 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:01.530 14:06:40 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:01.530 14:06:40 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:01.530 14:06:40 -- common/autotest_common.sh@1333 -- # break 00:24:01.530 14:06:40 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:01.530 14:06:40 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:01.530 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:01.530 fio-3.35 00:24:01.530 Starting 1 thread 00:24:04.060 00:24:04.060 test: (groupid=0, jobs=1): err= 0: pid=83359: Fri Apr 26 14:06:43 2024 00:24:04.060 read: IOPS=8805, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec) 00:24:04.060 slat (nsec): min=1790, max=282431, avg=2285.00, stdev=2887.61 00:24:04.060 clat (usec): min=3114, max=22775, avg=7601.54, stdev=1270.37 00:24:04.060 lat (usec): min=3159, max=22777, avg=7603.82, stdev=1270.44 00:24:04.060 clat percentiles (usec): 00:24:04.060 | 1.00th=[ 6063], 5.00th=[ 6652], 10.00th=[ 6783], 20.00th=[ 7046], 00:24:04.060 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:24:04.060 | 70.00th=[ 7701], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8586], 00:24:04.060 | 99.00th=[13566], 99.50th=[17171], 99.90th=[21103], 99.95th=[22414], 00:24:04.060 | 99.99th=[22676] 00:24:04.060 bw ( KiB/s): min=34816, max=35888, per=99.99%, avg=35218.00, stdev=481.35, samples=4 00:24:04.060 iops : min= 8704, max= 8972, avg=8804.50, stdev=120.34, samples=4 00:24:04.060 write: IOPS=8816, BW=34.4MiB/s (36.1MB/s)(69.1MiB/2007msec); 0 zone resets 00:24:04.060 slat (nsec): min=1843, max=200073, avg=2372.17, stdev=2101.72 00:24:04.060 clat (usec): min=2250, max=20930, avg=6835.53, stdev=1029.82 00:24:04.060 lat (usec): min=2268, max=20933, avg=6837.90, stdev=1029.86 00:24:04.060 clat percentiles (usec): 00:24:04.060 | 1.00th=[ 5407], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 6390], 00:24:04.060 | 30.00th=[ 6521], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6849], 00:24:04.060 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7635], 00:24:04.060 | 99.00th=[11731], 99.50th=[13304], 99.90th=[19268], 99.95th=[20055], 00:24:04.060 | 99.99th=[20841] 00:24:04.060 bw ( KiB/s): min=34144, max=36072, per=99.99%, avg=35260.00, stdev=824.80, samples=4 00:24:04.060 iops : min= 8534, max= 9018, avg=8815.00, stdev=207.40, samples=4 
00:24:04.060 lat (msec) : 4=0.07%, 10=97.87%, 20=1.93%, 50=0.12% 00:24:04.060 cpu : usr=66.60%, sys=25.07%, ctx=96, majf=0, minf=1536 00:24:04.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:04.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:04.060 issued rwts: total=17672,17694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:04.060 00:24:04.060 Run status group 0 (all jobs): 00:24:04.060 READ: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:24:04.060 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.1MiB (72.5MB), run=2007-2007msec 00:24:04.060 ----------------------------------------------------- 00:24:04.060 Suppressions used: 00:24:04.060 count bytes template 00:24:04.060 1 57 /usr/src/fio/parse.c 00:24:04.060 1 8 libtcmalloc_minimal.so 00:24:04.060 ----------------------------------------------------- 00:24:04.060 00:24:04.060 14:06:43 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:04.060 14:06:43 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:04.060 14:06:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:24:04.060 14:06:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:04.060 14:06:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:24:04.060 14:06:43 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:04.060 14:06:43 -- common/autotest_common.sh@1327 -- # shift 00:24:04.060 14:06:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:24:04.060 14:06:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.060 14:06:43 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:24:04.060 14:06:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:24:04.060 14:06:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:24:04.060 14:06:43 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:24:04.060 14:06:43 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:24:04.061 14:06:43 -- common/autotest_common.sh@1333 -- # break 00:24:04.061 14:06:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:24:04.061 14:06:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:04.319 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:04.319 fio-3.35 00:24:04.319 Starting 1 thread 00:24:06.850 00:24:06.850 test: (groupid=0, jobs=1): err= 0: pid=83400: Fri Apr 26 14:06:46 2024 00:24:06.850 read: IOPS=7982, BW=125MiB/s (131MB/s)(251MiB/2009msec) 00:24:06.850 slat (usec): min=2, max=105, avg= 3.42, stdev= 1.74 00:24:06.850 clat (usec): min=3083, max=19559, 
avg=9370.28, stdev=2383.49 00:24:06.850 lat (usec): min=3086, max=19562, avg=9373.70, stdev=2383.69 00:24:06.850 clat percentiles (usec): 00:24:06.850 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7439], 00:24:06.850 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9765], 00:24:06.850 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12387], 95.00th=[13829], 00:24:06.850 | 99.00th=[16909], 99.50th=[17957], 99.90th=[19006], 99.95th=[19268], 00:24:06.850 | 99.99th=[19530] 00:24:06.850 bw ( KiB/s): min=56768, max=72128, per=50.80%, avg=64872.00, stdev=6336.83, samples=4 00:24:06.850 iops : min= 3548, max= 4508, avg=4054.50, stdev=396.05, samples=4 00:24:06.850 write: IOPS=4554, BW=71.2MiB/s (74.6MB/s)(133MiB/1862msec); 0 zone resets 00:24:06.850 slat (usec): min=29, max=229, avg=32.43, stdev= 6.76 00:24:06.850 clat (usec): min=3874, max=21623, avg=11722.04, stdev=2246.78 00:24:06.850 lat (usec): min=3919, max=21653, avg=11754.47, stdev=2247.17 00:24:06.850 clat percentiles (usec): 00:24:06.850 | 1.00th=[ 7898], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:24:06.850 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11863], 00:24:06.850 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14746], 95.00th=[15926], 00:24:06.850 | 99.00th=[18744], 99.50th=[19792], 99.90th=[20841], 99.95th=[21103], 00:24:06.850 | 99.99th=[21627] 00:24:06.850 bw ( KiB/s): min=59680, max=74336, per=92.43%, avg=67360.00, stdev=6048.08, samples=4 00:24:06.850 iops : min= 3730, max= 4646, avg=4210.00, stdev=378.01, samples=4 00:24:06.850 lat (msec) : 4=0.22%, 10=49.62%, 20=50.01%, 50=0.15% 00:24:06.850 cpu : usr=71.41%, sys=20.12%, ctx=42, majf=0, minf=2215 00:24:06.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:24:06.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:06.850 issued rwts: total=16036,8481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:06.850 00:24:06.850 Run status group 0 (all jobs): 00:24:06.850 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2009-2009msec 00:24:06.850 WRITE: bw=71.2MiB/s (74.6MB/s), 71.2MiB/s-71.2MiB/s (74.6MB/s-74.6MB/s), io=133MiB (139MB), run=1862-1862msec 00:24:06.850 ----------------------------------------------------- 00:24:06.850 Suppressions used: 00:24:06.850 count bytes template 00:24:06.850 1 57 /usr/src/fio/parse.c 00:24:06.851 592 56832 /usr/src/fio/iolog.c 00:24:06.851 1 8 libtcmalloc_minimal.so 00:24:06.851 ----------------------------------------------------- 00:24:06.851 00:24:06.851 14:06:46 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.851 14:06:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.851 14:06:46 -- common/autotest_common.sh@10 -- # set +x 00:24:06.851 14:06:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.851 14:06:46 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:06.851 14:06:46 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:06.851 14:06:46 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:06.851 14:06:46 -- host/fio.sh@84 -- # nvmftestfini 00:24:06.851 14:06:46 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:06.851 14:06:46 -- nvmf/common.sh@117 -- # sync 00:24:06.851 14:06:46 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.851 14:06:46 -- nvmf/common.sh@120 -- # set 
+e 00:24:06.851 14:06:46 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.851 14:06:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.851 rmmod nvme_tcp 00:24:06.851 rmmod nvme_fabrics 00:24:06.851 rmmod nvme_keyring 00:24:06.851 14:06:46 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.109 14:06:46 -- nvmf/common.sh@124 -- # set -e 00:24:07.109 14:06:46 -- nvmf/common.sh@125 -- # return 0 00:24:07.109 14:06:46 -- nvmf/common.sh@478 -- # '[' -n 83288 ']' 00:24:07.109 14:06:46 -- nvmf/common.sh@479 -- # killprocess 83288 00:24:07.109 14:06:46 -- common/autotest_common.sh@936 -- # '[' -z 83288 ']' 00:24:07.109 14:06:46 -- common/autotest_common.sh@940 -- # kill -0 83288 00:24:07.109 14:06:46 -- common/autotest_common.sh@941 -- # uname 00:24:07.109 14:06:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:07.109 14:06:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83288 00:24:07.109 killing process with pid 83288 00:24:07.109 14:06:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:07.109 14:06:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:07.109 14:06:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83288' 00:24:07.109 14:06:46 -- common/autotest_common.sh@955 -- # kill 83288 00:24:07.109 14:06:46 -- common/autotest_common.sh@960 -- # wait 83288 00:24:08.485 14:06:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:08.485 14:06:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:08.485 14:06:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:08.485 14:06:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.485 14:06:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:08.485 14:06:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.485 14:06:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.485 14:06:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.485 14:06:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:08.485 00:24:08.485 real 0m8.968s 00:24:08.485 user 0m32.699s 00:24:08.485 sys 0m2.568s 00:24:08.485 14:06:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:08.485 ************************************ 00:24:08.485 END TEST nvmf_fio_host 00:24:08.485 ************************************ 00:24:08.485 14:06:48 -- common/autotest_common.sh@10 -- # set +x 00:24:08.744 14:06:48 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:08.744 14:06:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:08.744 14:06:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:08.744 14:06:48 -- common/autotest_common.sh@10 -- # set +x 00:24:08.744 ************************************ 00:24:08.744 START TEST nvmf_failover 00:24:08.744 ************************************ 00:24:08.744 14:06:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:08.744 * Looking for test storage... 
00:24:08.744 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:08.744 14:06:48 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:08.744 14:06:48 -- nvmf/common.sh@7 -- # uname -s 00:24:08.744 14:06:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:08.744 14:06:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:08.744 14:06:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:08.744 14:06:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:08.744 14:06:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:08.744 14:06:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:08.744 14:06:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:08.744 14:06:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:08.744 14:06:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:08.744 14:06:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.004 14:06:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:24:09.004 14:06:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:24:09.004 14:06:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.004 14:06:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.004 14:06:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:09.004 14:06:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.004 14:06:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:09.004 14:06:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.004 14:06:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.004 14:06:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.004 14:06:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.004 14:06:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.004 14:06:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.004 14:06:48 -- paths/export.sh@5 -- # export PATH 00:24:09.004 14:06:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.004 14:06:48 -- nvmf/common.sh@47 -- # : 0 00:24:09.004 14:06:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:09.004 14:06:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:09.004 14:06:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:09.004 14:06:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.004 14:06:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.004 14:06:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:09.004 14:06:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:09.004 14:06:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:09.004 14:06:48 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:09.004 14:06:48 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:09.004 14:06:48 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:09.004 14:06:48 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.004 14:06:48 -- host/failover.sh@18 -- # nvmftestinit 00:24:09.004 14:06:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:09.004 14:06:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.004 14:06:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:09.004 14:06:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:09.004 14:06:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:09.004 14:06:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.004 14:06:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:09.004 14:06:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.004 14:06:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:09.004 14:06:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:09.004 14:06:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:09.004 14:06:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:09.004 14:06:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:09.004 14:06:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:09.004 14:06:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.004 14:06:48 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.004 14:06:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:09.004 14:06:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:09.004 14:06:48 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:09.004 14:06:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:09.004 14:06:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:09.004 14:06:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.004 14:06:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:09.004 14:06:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:09.004 14:06:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:09.004 14:06:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:09.004 14:06:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:09.004 14:06:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:09.004 Cannot find device "nvmf_tgt_br" 00:24:09.005 14:06:48 -- nvmf/common.sh@155 -- # true 00:24:09.005 14:06:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:09.005 Cannot find device "nvmf_tgt_br2" 00:24:09.005 14:06:48 -- nvmf/common.sh@156 -- # true 00:24:09.005 14:06:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:09.005 14:06:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:09.005 Cannot find device "nvmf_tgt_br" 00:24:09.005 14:06:48 -- nvmf/common.sh@158 -- # true 00:24:09.005 14:06:48 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:09.005 Cannot find device "nvmf_tgt_br2" 00:24:09.005 14:06:48 -- nvmf/common.sh@159 -- # true 00:24:09.005 14:06:48 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:09.005 14:06:48 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:09.005 14:06:48 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:09.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:09.005 14:06:48 -- nvmf/common.sh@162 -- # true 00:24:09.005 14:06:48 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:09.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:09.005 14:06:48 -- nvmf/common.sh@163 -- # true 00:24:09.005 14:06:48 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:09.005 14:06:48 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:09.005 14:06:48 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:09.005 14:06:48 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:09.265 14:06:48 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:09.265 14:06:48 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:09.265 14:06:48 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:09.265 14:06:48 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:09.265 14:06:48 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:09.265 14:06:48 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:09.265 14:06:48 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:09.265 14:06:48 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:09.265 14:06:48 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:09.265 14:06:48 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:24:09.265 14:06:48 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:09.265 14:06:48 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:09.265 14:06:48 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:09.265 14:06:48 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:09.265 14:06:48 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:09.265 14:06:48 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:09.265 14:06:48 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:09.265 14:06:48 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:09.265 14:06:48 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:09.265 14:06:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:09.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:24:09.265 00:24:09.265 --- 10.0.0.2 ping statistics --- 00:24:09.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.265 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:24:09.265 14:06:48 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:09.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:09.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:24:09.265 00:24:09.265 --- 10.0.0.3 ping statistics --- 00:24:09.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.265 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:09.265 14:06:48 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:09.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:24:09.265 00:24:09.265 --- 10.0.0.1 ping statistics --- 00:24:09.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.265 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:09.265 14:06:48 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.265 14:06:48 -- nvmf/common.sh@422 -- # return 0 00:24:09.265 14:06:48 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:09.265 14:06:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.265 14:06:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:09.265 14:06:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:09.265 14:06:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.265 14:06:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:09.265 14:06:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:09.265 14:06:48 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:09.265 14:06:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:09.265 14:06:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:09.265 14:06:48 -- common/autotest_common.sh@10 -- # set +x 00:24:09.265 14:06:48 -- nvmf/common.sh@470 -- # nvmfpid=83631 00:24:09.265 14:06:48 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:09.265 14:06:48 -- nvmf/common.sh@471 -- # waitforlisten 83631 00:24:09.265 14:06:48 -- common/autotest_common.sh@817 -- # '[' -z 83631 ']' 00:24:09.265 14:06:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.265 14:06:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:09.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.265 14:06:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.265 14:06:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:09.265 14:06:48 -- common/autotest_common.sh@10 -- # set +x 00:24:09.524 [2024-04-26 14:06:48.986241] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:09.524 [2024-04-26 14:06:48.986357] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.524 [2024-04-26 14:06:49.157822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:09.783 [2024-04-26 14:06:49.399018] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.783 [2024-04-26 14:06:49.399074] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.783 [2024-04-26 14:06:49.399090] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.783 [2024-04-26 14:06:49.399111] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.784 [2024-04-26 14:06:49.399124] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
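The failover target is started the same way as in the previous test, only with a different core mask; a rough sketch using the command line echoed above (the rpc_get_methods poll is just a simple stand-in for the waitforlisten helper):

# run nvmf_tgt inside the target namespace with core mask 0xE
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# wait until the app answers on its RPC socket before configuring it
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done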
00:24:09.784 [2024-04-26 14:06:49.399457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.784 [2024-04-26 14:06:49.399414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.784 [2024-04-26 14:06:49.400078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.351 14:06:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:10.351 14:06:49 -- common/autotest_common.sh@850 -- # return 0 00:24:10.351 14:06:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:10.351 14:06:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:10.351 14:06:49 -- common/autotest_common.sh@10 -- # set +x 00:24:10.351 14:06:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.351 14:06:49 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.610 [2024-04-26 14:06:50.027974] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.610 14:06:50 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:10.901 Malloc0 00:24:10.901 14:06:50 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:10.901 14:06:50 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.161 14:06:50 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.420 [2024-04-26 14:06:50.882409] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.420 14:06:50 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:11.420 [2024-04-26 14:06:51.066393] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.420 14:06:51 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:11.680 [2024-04-26 14:06:51.266407] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:11.680 14:06:51 -- host/failover.sh@31 -- # bdevperf_pid=83740 00:24:11.680 14:06:51 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:11.680 14:06:51 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:11.680 14:06:51 -- host/failover.sh@34 -- # waitforlisten 83740 /var/tmp/bdevperf.sock 00:24:11.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:11.680 14:06:51 -- common/autotest_common.sh@817 -- # '[' -z 83740 ']' 00:24:11.680 14:06:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.680 14:06:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:11.680 14:06:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:11.680 14:06:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:11.680 14:06:51 -- common/autotest_common.sh@10 -- # set +x 00:24:12.614 14:06:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:12.614 14:06:52 -- common/autotest_common.sh@850 -- # return 0 00:24:12.614 14:06:52 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:12.873 NVMe0n1 00:24:12.873 14:06:52 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:13.133 00:24:13.133 14:06:52 -- host/failover.sh@39 -- # run_test_pid=83788 00:24:13.133 14:06:52 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:13.133 14:06:52 -- host/failover.sh@41 -- # sleep 1 00:24:14.070 14:06:53 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:14.329 [2024-04-26 14:06:53.924422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.329 [2024-04-26 14:06:53.924481] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.329 [2024-04-26 14:06:53.924495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924529] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924539] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924570] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924590] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.924611] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is 
same with the state(5) to be set
00:24:14.330 [ ... the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x618000002880 repeats many more times here ... ]
00:24:14.330 [2024-04-26 14:06:53.925263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the
state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925273] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925282] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925302] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925322] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925331] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925341] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 [2024-04-26 14:06:53.925351] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:24:14.330 14:06:53 -- host/failover.sh@45 -- # sleep 3 00:24:17.680 14:06:56 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:17.680 00:24:17.680 14:06:57 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:17.968 [2024-04-26 14:06:57.411150] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411231] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411242] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411253] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411263] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411274] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 [2024-04-26 14:06:57.411296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.968 
[ ... the same tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* for tqpair=0x618000003080 repeats many more times here ... ] 00:24:17.969
[2024-04-26 14:06:57.411763] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411773] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411784] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411795] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411805] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411815] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411858] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411868] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411878] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411899] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411929] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411939] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 [2024-04-26 14:06:57.411948] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:24:17.969 14:06:57 -- host/failover.sh@50 -- # sleep 3 00:24:21.265 14:07:00 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.265 [2024-04-26 14:07:00.628294] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:24:21.265 14:07:00 -- host/failover.sh@55 -- # sleep 1 00:24:22.202 14:07:01 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.202 [2024-04-26 14:07:01.837459] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837518] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837542] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837563] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837573] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 [2024-04-26 14:07:01.837583] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:24:22.202 14:07:01 -- host/failover.sh@59 -- # wait 83788 00:24:28.777 0 00:24:28.777 14:07:07 -- host/failover.sh@61 -- # killprocess 83740 00:24:28.777 14:07:07 -- common/autotest_common.sh@936 -- # '[' -z 83740 ']' 00:24:28.777 14:07:07 -- common/autotest_common.sh@940 -- # kill -0 83740 00:24:28.777 14:07:07 -- common/autotest_common.sh@941 -- # uname 00:24:28.777 14:07:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:28.777 14:07:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83740 00:24:28.777 killing process with pid 83740 00:24:28.777 14:07:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:28.777 14:07:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:28.777 14:07:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83740' 00:24:28.777 14:07:07 -- common/autotest_common.sh@955 -- # kill 83740 00:24:28.777 14:07:07 -- common/autotest_common.sh@960 -- # wait 83740 00:24:29.721 14:07:09 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:29.721 [2024-04-26 14:06:51.377787] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:29.721 [2024-04-26 14:06:51.377924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83740 ] 00:24:29.721 [2024-04-26 14:06:51.562513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.721 [2024-04-26 14:06:51.799297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.721 Running I/O for 15 seconds... 
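Everything below is the contents of try.txt, i.e. bdevperf's own log, dumped by the script after the run reports success ('0') and bdevperf is killed. The wall of 'ABORTED - SQ DELETION (00/08)' completions that follows is the expected side effect of the failover steps: when a listener is removed the target deletes the queue pair, so every command still in flight on that path is completed as aborted and the NVMe bdev layer retries it on a surviving path. For reference, the listener/path cycle that drove this, condensed from the trace above (same $rpc shorthand as in the earlier sketch; commands and ports taken from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # drop the active path; in-flight I/O is aborted and the initiator falls back to 10.0.0.2:4421
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # give bdevperf a third path before the second listener goes away
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422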
00:24:29.721 [2024-04-26 14:06:53.926678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.721 [2024-04-26 14:06:53.926731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:87032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:87056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:87072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.926973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.926990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927423] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:87192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.721 [2024-04-26 14:06:53.927486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.721 [2024-04-26 14:06:53.927501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:87208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.927539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.927572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.927604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.927636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.927668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.927700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:86880 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.927967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.927984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.928000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.928031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.928063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:29.722 [2024-04-26 14:06:53.928095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.722 [2024-04-26 14:06:53.928126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:87256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:87344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.722 [2024-04-26 14:06:53.928574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.722 [2024-04-26 14:06:53.928590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.723 [2024-04-26 14:06:53.928605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.723 [2024-04-26 14:06:53.928622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.723 [2024-04-26 14:06:53.928637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.723 [2024-04-26 14:06:53.928653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.723 [2024-04-26 14:06:53.928668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.723 [2024-04-26 14:06:53.928685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.723 [2024-04-26 14:06:53.928700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.723 [2024-04-26 14:06:53.928717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.723 [2024-04-26 14:06:53.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.723 [2024-04-26 14:06:53.928748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.723 [2024-04-26 14:06:53.928763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:29.723 [2024-04-26 14:06:53.928780 - 14:06:53.931445] nvme_qpair.c: [... repeated NOTICE pairs (nvme_io_qpair_print_command / spdk_nvme_print_completion): outstanding WRITE commands lba:87408-87880 and READ commands lba:86976-87024 on sqid:1 each reported as ABORTED - SQ DELETION (00/08); queued i/o aborted (nvme_qpair_abort_queued_reqs) and completed manually (nvme_qpair_manual_complete_request) ...]
00:24:29.725 [2024-04-26 14:06:53.931680] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller.
00:24:29.725 [2024-04-26 14:06:53.931702] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:29.725 [2024-04-26 14:06:53.931757 - 14:06:53.931875] nvme_qpair.c: [... repeated NOTICE pairs: admin ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 each reported as ABORTED - SQ DELETION (00/08) ...]
00:24:29.725 [2024-04-26 14:06:53.931890] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-04-26 14:06:53.931936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:24:29.725 [2024-04-26 14:06:53.934906] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:29.725 [2024-04-26 14:06:53.984717] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:29.726 [2024-04-26 14:06:57.412147 - 14:06:57.416343] nvme_qpair.c: [... repeated NOTICE pairs (nvme_io_qpair_print_command / spdk_nvme_print_completion): after the failover, outstanding READ commands lba:97416-97992 and WRITE commands lba:98056-98432 on sqid:1 each reported as ABORTED - SQ DELETION (00/08) ...]
00:24:29.729 [2024-04-26 14:06:57.416359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98000 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.729 [2024-04-26 14:06:57.416375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.729 [2024-04-26 14:06:57.416391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.729 [2024-04-26 14:06:57.416406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.729 [2024-04-26 14:06:57.416423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.729 [2024-04-26 14:06:57.416438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.416454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:06:57.416469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.416486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:06:57.416501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.416517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:06:57.416532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.416576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.730 [2024-04-26 14:06:57.416590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.730 [2024-04-26 14:06:57.416606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98048 len:8 PRP1 0x0 PRP2 0x0 00:24:29.730 [2024-04-26 14:06:57.416624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.416911] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller. 
00:24:29.730 [2024-04-26 14:06:57.416934] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:29.730 [2024-04-26 14:06:57.416997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.730 [2024-04-26 14:06:57.417017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.417035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.730 [2024-04-26 14:06:57.417050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.417067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.730 [2024-04-26 14:06:57.417082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.417098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.730 [2024-04-26 14:06:57.417113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:06:57.417130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.730 [2024-04-26 14:06:57.417206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:24:29.730 [2024-04-26 14:06:57.420214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.730 [2024-04-26 14:06:57.453832] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
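The block above is the first failover pass in this excerpt: the TCP qpair to 10.0.0.2:4421 is torn down, bdev_nvme fails the trid over to 10.0.0.2:4422, and the controller reset completes. For the host to hop between ports like this, the target has to expose the same subsystem on several listeners. A minimal sketch of that target-side setup with SPDK's rpc.py follows; only the NQN, address and ports are taken from the log, while the rpc.py path and the assumption that the TCP transport and subsystem already exist are hypothetical and may not match what failover.sh actually does.

# Sketch only: expose nqn.2016-06.io.spdk:cnode1 on the three TCP ports the log
# fails over between (4420/4421/4422). Assumes the TCP transport and the
# subsystem were created beforehand; the rpc.py path is an assumption.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
for port in 4420 4421 4422; do
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port" -f ipv4
done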
00:24:29.730 [2024-04-26 14:07:01.837962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:56080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.730 [2024-04-26 14:07:01.838449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838466] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.730 [2024-04-26 14:07:01.838678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:56088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.730 [2024-04-26 14:07:01.838711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.730 [2024-04-26 14:07:01.838750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.730 [2024-04-26 14:07:01.838783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:56112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.730 [2024-04-26 14:07:01.838816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.730 [2024-04-26 14:07:01.838833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:56120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.838848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.838866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:56128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.838881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.838898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.838914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.838931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:56144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.838947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.838964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:56152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.838980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.838997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:56160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:56176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:65 nsid:1 lba:56192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:56200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:56216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:56224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:56232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:56240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:56248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:56256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:56264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:56272 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:56288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:56296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:56312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:56320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:56328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:56336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:56344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:56352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 
14:07:01.839841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:56368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.731 [2024-04-26 14:07:01.839908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.731 [2024-04-26 14:07:01.839924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:56376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.839940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.839959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.839974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.839991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:56392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:56408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:56416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:56424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840188] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:56448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:56456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:56464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:56472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:56488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:56496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:56504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:56512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:56520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:56528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.732 [2024-04-26 14:07:01.840603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.732 [2024-04-26 14:07:01.840819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.732 [2024-04-26 14:07:01.840836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.840855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.840871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.840889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.840906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.840927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.840943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.840961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.840976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.840994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:56536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:56544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 
14:07:01.841238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:56552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:56560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:56592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:29.733 [2024-04-26 14:07:01.841425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.733 [2024-04-26 14:07:01.841903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:63 nsid:1 lba:55944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.733 [2024-04-26 14:07:01.841919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.841936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.841959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.841976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.841993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56024 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.734 [2024-04-26 14:07:01.842483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009240 is same with the state(5) to be set 00:24:29.734 [2024-04-26 14:07:01.842521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:29.734 [2024-04-26 14:07:01.842535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:29.734 [2024-04-26 14:07:01.842548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56072 len:8 PRP1 0x0 PRP2 0x0 00:24:29.734 [2024-04-26 14:07:01.842575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842827] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller. 
00:24:29.734 [2024-04-26 14:07:01.842849] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:29.734 [2024-04-26 14:07:01.842918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.734 [2024-04-26 14:07:01.842940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.734 [2024-04-26 14:07:01.842974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.842992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.734 [2024-04-26 14:07:01.843029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.843046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:29.734 [2024-04-26 14:07:01.843062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:29.734 [2024-04-26 14:07:01.843077] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:29.734 [2024-04-26 14:07:01.843143] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:24:29.734 [2024-04-26 14:07:01.846201] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.734 [2024-04-26 14:07:01.883725] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:29.734 00:24:29.734 Latency(us) 00:24:29.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.734 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:29.734 Verification LBA range: start 0x0 length 0x4000 00:24:29.734 NVMe0n1 : 15.01 9317.17 36.40 274.33 0.00 13319.61 509.94 23161.32 00:24:29.734 =================================================================================================================== 00:24:29.734 Total : 9317.17 36.40 274.33 0.00 13319.61 509.94 23161.32 00:24:29.734 Received shutdown signal, test time was about 15.000000 seconds 00:24:29.734 00:24:29.734 Latency(us) 00:24:29.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.734 =================================================================================================================== 00:24:29.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.734 14:07:09 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:29.734 14:07:09 -- host/failover.sh@65 -- # count=3 00:24:29.734 14:07:09 -- host/failover.sh@67 -- # (( count != 3 )) 00:24:29.734 14:07:09 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:29.734 14:07:09 -- host/failover.sh@73 -- # bdevperf_pid=84008 00:24:29.734 14:07:09 -- host/failover.sh@75 -- # waitforlisten 84008 /var/tmp/bdevperf.sock 00:24:29.734 14:07:09 -- common/autotest_common.sh@817 -- # '[' -z 84008 ']' 00:24:29.734 14:07:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:29.734 14:07:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:29.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:29.734 14:07:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
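The "Resetting controller successful" lines above are the pass condition for this phase. Reconstructed from the traced commands (the bdevperf binary path, flags and RPC socket are taken verbatim from this log; the grep target file, the exit on mismatch and the backgrounding with $! are inferred rather than checked against failover.sh itself), the count check and the relaunch of bdevperf in idle RPC-driven mode look roughly like:

    # the 15 s verify run above must have recovered from exactly 3 forced failovers
    count=$(grep -c 'Resetting controller successful' \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count != 3 )) && exit 1

    # relaunch bdevperf idle (-z) so the next phase can attach paths and start I/O over its RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # autotest helper seen in the trace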
00:24:29.734 14:07:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:29.734 14:07:09 -- common/autotest_common.sh@10 -- # set +x 00:24:30.672 14:07:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:30.672 14:07:10 -- common/autotest_common.sh@850 -- # return 0 00:24:30.672 14:07:10 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.930 [2024-04-26 14:07:10.426777] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.930 14:07:10 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:31.188 [2024-04-26 14:07:10.638742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:31.188 14:07:10 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.445 NVMe0n1 00:24:31.445 14:07:10 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.702 00:24:31.702 14:07:11 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.000 00:24:32.000 14:07:11 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:32.000 14:07:11 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:32.000 14:07:11 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.273 14:07:11 -- host/failover.sh@87 -- # sleep 3 00:24:35.556 14:07:14 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:35.556 14:07:14 -- host/failover.sh@88 -- # grep -q NVMe0 00:24:35.556 14:07:15 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:35.556 14:07:15 -- host/failover.sh@90 -- # run_test_pid=84145 00:24:35.556 14:07:15 -- host/failover.sh@92 -- # wait 84145 00:24:36.492 0 00:24:36.492 14:07:16 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:36.492 [2024-04-26 14:07:09.345425] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
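The xtrace lines above interleave two RPC endpoints, the target's default socket and bdevperf's own socket. Condensed into plain commands (the per-port loop is only a compaction of the three separate attach calls traced here), the multipath setup and the forced failover that the next one-second run exercises are:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target side: expose nqn.2016-06.io.spdk:cnode1 on two extra ports so the initiator has spare paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # initiator side, via bdevperf's RPC socket: attach all three paths under the single bdev name NVMe0
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

    # drop the currently active 4420 path to force a failover, settle, then drive I/O for one second
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sleep 3
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests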
00:24:36.492 [2024-04-26 14:07:09.345555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84008 ] 00:24:36.492 [2024-04-26 14:07:09.501701] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.492 [2024-04-26 14:07:09.787730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.492 [2024-04-26 14:07:11.808424] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:36.492 [2024-04-26 14:07:11.808624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.492 [2024-04-26 14:07:11.808653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.492 [2024-04-26 14:07:11.808684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.492 [2024-04-26 14:07:11.808702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.492 [2024-04-26 14:07:11.808720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.492 [2024-04-26 14:07:11.808738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.492 [2024-04-26 14:07:11.808756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:36.492 [2024-04-26 14:07:11.808772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:36.492 [2024-04-26 14:07:11.808789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:36.492 [2024-04-26 14:07:11.808877] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:36.492 [2024-04-26 14:07:11.808921] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:24:36.492 [2024-04-26 14:07:11.816429] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:36.492 Running I/O for 1 seconds... 
00:24:36.492 00:24:36.492 Latency(us) 00:24:36.492 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.492 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:36.492 Verification LBA range: start 0x0 length 0x4000 00:24:36.492 NVMe0n1 : 1.01 8905.92 34.79 0.00 0.00 14304.56 1572.60 14528.46 00:24:36.492 =================================================================================================================== 00:24:36.492 Total : 8905.92 34.79 0.00 0.00 14304.56 1572.60 14528.46 00:24:36.492 14:07:16 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:36.492 14:07:16 -- host/failover.sh@95 -- # grep -q NVMe0 00:24:36.750 14:07:16 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.008 14:07:16 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:37.008 14:07:16 -- host/failover.sh@99 -- # grep -q NVMe0 00:24:37.267 14:07:16 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:37.526 14:07:16 -- host/failover.sh@101 -- # sleep 3 00:24:40.890 14:07:19 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:40.890 14:07:19 -- host/failover.sh@103 -- # grep -q NVMe0 00:24:40.890 14:07:20 -- host/failover.sh@108 -- # killprocess 84008 00:24:40.890 14:07:20 -- common/autotest_common.sh@936 -- # '[' -z 84008 ']' 00:24:40.891 14:07:20 -- common/autotest_common.sh@940 -- # kill -0 84008 00:24:40.891 14:07:20 -- common/autotest_common.sh@941 -- # uname 00:24:40.891 14:07:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:40.891 14:07:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84008 00:24:40.891 killing process with pid 84008 00:24:40.891 14:07:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:40.891 14:07:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:40.891 14:07:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84008' 00:24:40.891 14:07:20 -- common/autotest_common.sh@955 -- # kill 84008 00:24:40.891 14:07:20 -- common/autotest_common.sh@960 -- # wait 84008 00:24:42.263 14:07:21 -- host/failover.sh@110 -- # sync 00:24:42.263 14:07:21 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:42.263 14:07:21 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:42.263 14:07:21 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:24:42.263 14:07:21 -- host/failover.sh@116 -- # nvmftestfini 00:24:42.263 14:07:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:42.263 14:07:21 -- nvmf/common.sh@117 -- # sync 00:24:42.263 14:07:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:42.263 14:07:21 -- nvmf/common.sh@120 -- # set +e 00:24:42.263 14:07:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:42.263 14:07:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:42.263 rmmod nvme_tcp 00:24:42.263 rmmod nvme_fabrics 00:24:42.263 rmmod nvme_keyring 00:24:42.263 14:07:21 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:24:42.263 14:07:21 -- nvmf/common.sh@124 -- # set -e 00:24:42.263 14:07:21 -- nvmf/common.sh@125 -- # return 0 00:24:42.263 14:07:21 -- nvmf/common.sh@478 -- # '[' -n 83631 ']' 00:24:42.263 14:07:21 -- nvmf/common.sh@479 -- # killprocess 83631 00:24:42.263 14:07:21 -- common/autotest_common.sh@936 -- # '[' -z 83631 ']' 00:24:42.263 14:07:21 -- common/autotest_common.sh@940 -- # kill -0 83631 00:24:42.263 14:07:21 -- common/autotest_common.sh@941 -- # uname 00:24:42.263 14:07:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:42.263 14:07:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83631 00:24:42.521 killing process with pid 83631 00:24:42.521 14:07:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:42.521 14:07:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:42.521 14:07:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83631' 00:24:42.521 14:07:21 -- common/autotest_common.sh@955 -- # kill 83631 00:24:42.521 14:07:21 -- common/autotest_common.sh@960 -- # wait 83631 00:24:43.946 14:07:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:43.946 14:07:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:43.946 14:07:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:43.946 14:07:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:43.946 14:07:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:43.946 14:07:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.946 14:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:43.946 14:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.204 14:07:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:44.204 00:24:44.204 real 0m35.373s 00:24:44.204 user 2m12.144s 00:24:44.204 sys 0m5.786s 00:24:44.204 14:07:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:44.204 ************************************ 00:24:44.204 END TEST nvmf_failover 00:24:44.204 ************************************ 00:24:44.204 14:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:44.204 14:07:23 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:44.204 14:07:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:44.204 14:07:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:44.204 14:07:23 -- common/autotest_common.sh@10 -- # set +x 00:24:44.204 ************************************ 00:24:44.204 START TEST nvmf_discovery 00:24:44.204 ************************************ 00:24:44.204 14:07:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:44.463 * Looking for test storage... 
00:24:44.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:44.463 14:07:23 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:44.463 14:07:23 -- nvmf/common.sh@7 -- # uname -s 00:24:44.463 14:07:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.463 14:07:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.463 14:07:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.463 14:07:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.463 14:07:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.463 14:07:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.463 14:07:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.463 14:07:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.463 14:07:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.463 14:07:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.463 14:07:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:24:44.463 14:07:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:24:44.463 14:07:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.463 14:07:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.463 14:07:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:44.463 14:07:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.463 14:07:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:44.463 14:07:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.463 14:07:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.463 14:07:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.463 14:07:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.463 14:07:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.463 14:07:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.463 14:07:23 -- paths/export.sh@5 -- # export PATH 00:24:44.463 14:07:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.463 14:07:23 -- nvmf/common.sh@47 -- # : 0 00:24:44.463 14:07:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.463 14:07:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.463 14:07:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.463 14:07:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.463 14:07:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.463 14:07:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.463 14:07:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.463 14:07:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.463 14:07:23 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:44.463 14:07:23 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:44.463 14:07:23 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:44.463 14:07:23 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:44.463 14:07:23 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:44.463 14:07:23 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:44.463 14:07:23 -- host/discovery.sh@25 -- # nvmftestinit 00:24:44.463 14:07:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:44.463 14:07:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.463 14:07:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:44.463 14:07:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:44.463 14:07:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:44.463 14:07:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.463 14:07:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.463 14:07:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.463 14:07:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:44.463 14:07:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:44.463 14:07:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:44.463 14:07:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:44.464 14:07:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:44.464 14:07:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:44.464 14:07:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.464 14:07:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.464 14:07:23 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:44.464 14:07:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:44.464 14:07:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:44.464 14:07:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:44.464 14:07:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:44.464 14:07:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.464 14:07:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:44.464 14:07:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:44.464 14:07:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:44.464 14:07:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:44.464 14:07:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:44.464 14:07:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:44.464 Cannot find device "nvmf_tgt_br" 00:24:44.464 14:07:24 -- nvmf/common.sh@155 -- # true 00:24:44.464 14:07:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:44.464 Cannot find device "nvmf_tgt_br2" 00:24:44.464 14:07:24 -- nvmf/common.sh@156 -- # true 00:24:44.464 14:07:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:44.464 14:07:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:44.464 Cannot find device "nvmf_tgt_br" 00:24:44.464 14:07:24 -- nvmf/common.sh@158 -- # true 00:24:44.464 14:07:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:44.464 Cannot find device "nvmf_tgt_br2" 00:24:44.464 14:07:24 -- nvmf/common.sh@159 -- # true 00:24:44.464 14:07:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:44.464 14:07:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:44.464 14:07:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:44.464 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.464 14:07:24 -- nvmf/common.sh@162 -- # true 00:24:44.464 14:07:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:44.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:44.722 14:07:24 -- nvmf/common.sh@163 -- # true 00:24:44.722 14:07:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:44.722 14:07:24 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:44.722 14:07:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:44.722 14:07:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:44.722 14:07:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:44.722 14:07:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:44.722 14:07:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:44.722 14:07:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:44.722 14:07:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:44.722 14:07:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:44.722 14:07:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:44.722 14:07:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:44.722 14:07:24 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:44.722 14:07:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:44.722 14:07:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:44.722 14:07:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:44.722 14:07:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:44.722 14:07:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:44.722 14:07:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:44.722 14:07:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:44.722 14:07:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:44.722 14:07:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:44.722 14:07:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:44.722 14:07:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:44.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:24:44.722 00:24:44.722 --- 10.0.0.2 ping statistics --- 00:24:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.722 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:24:44.722 14:07:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:44.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:44.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:24:44.722 00:24:44.722 --- 10.0.0.3 ping statistics --- 00:24:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.722 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:24:44.722 14:07:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:44.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:24:44.722 00:24:44.722 --- 10.0.0.1 ping statistics --- 00:24:44.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.722 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:24:44.722 14:07:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.722 14:07:24 -- nvmf/common.sh@422 -- # return 0 00:24:44.722 14:07:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:44.722 14:07:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.722 14:07:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:44.722 14:07:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:44.722 14:07:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.722 14:07:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:44.722 14:07:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:44.980 14:07:24 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:44.980 14:07:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:44.980 14:07:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:44.980 14:07:24 -- common/autotest_common.sh@10 -- # set +x 00:24:44.980 14:07:24 -- nvmf/common.sh@470 -- # nvmfpid=84480 00:24:44.980 14:07:24 -- nvmf/common.sh@471 -- # waitforlisten 84480 00:24:44.980 14:07:24 -- common/autotest_common.sh@817 -- # '[' -z 84480 ']' 00:24:44.980 14:07:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.980 14:07:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.980 14:07:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.980 14:07:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.980 14:07:24 -- common/autotest_common.sh@10 -- # set +x 00:24:44.980 14:07:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:44.980 [2024-04-26 14:07:24.513397] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:44.980 [2024-04-26 14:07:24.513518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.238 [2024-04-26 14:07:24.684298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.498 [2024-04-26 14:07:24.971799] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.498 [2024-04-26 14:07:24.971872] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.498 [2024-04-26 14:07:24.971890] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.498 [2024-04-26 14:07:24.971917] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.498 [2024-04-26 14:07:24.971931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
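With NET_TYPE=virt the fixture has no physical NICs, so nvmf_veth_init builds the whole topology from veth pairs and a bridge. Condensed from the trace above (the target-side link-up commands are folded into one ip netns exec sh -c purely for brevity), the layout puts 10.0.0.1 on the host-side initiator interface and 10.0.0.2/10.0.0.3 on target interfaces inside the nvmf_tgt_ns_spdk namespace:

    # namespace for the target; veth pairs whose *_br ends stay on the host bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator stays at 10.0.0.1, the target answers on 10.0.0.2 and 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and tie the host-side ends together with a bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk sh -c \
        'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # open the NVMe/TCP port, allow bridged traffic, sanity-check reachability, load the kernel module
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp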
00:24:45.498 [2024-04-26 14:07:24.971988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.066 14:07:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:46.066 14:07:25 -- common/autotest_common.sh@850 -- # return 0 00:24:46.066 14:07:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:46.066 14:07:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:46.066 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.066 14:07:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.066 14:07:25 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.066 14:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.066 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.066 [2024-04-26 14:07:25.519084] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.066 14:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.066 14:07:25 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:46.066 14:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.066 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.066 [2024-04-26 14:07:25.531308] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:46.066 14:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.066 14:07:25 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:46.066 14:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.066 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.066 null0 00:24:46.066 14:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.066 14:07:25 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:46.066 14:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.066 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.066 null1 00:24:46.066 14:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.066 14:07:25 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:46.066 14:07:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.066 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.066 14:07:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.066 14:07:25 -- host/discovery.sh@45 -- # hostpid=84530 00:24:46.066 14:07:25 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:46.066 14:07:25 -- host/discovery.sh@46 -- # waitforlisten 84530 /tmp/host.sock 00:24:46.066 14:07:25 -- common/autotest_common.sh@817 -- # '[' -z 84530 ']' 00:24:46.066 14:07:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:46.067 14:07:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:46.067 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:46.067 14:07:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:46.067 14:07:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:46.067 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:24:46.067 [2024-04-26 14:07:25.667374] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
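From here the discovery test drives two SPDK processes over JSON-RPC: the target on its default socket serves the discovery subsystem on port 8009 plus two null bdevs to export later, while a second nvmf_tgt on /tmp/host.sock plays the host and runs the discovery poller. A condensed sketch of the calls traced here, with sockets spelled out (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py; the $rpc variable below is only shorthand, and the expected names in the comments come from later in this log, after cnode0 is created):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # target (default /var/tmp/spdk.sock): TCP transport, discovery listener on 8009, two null bdevs
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $rpc bdev_null_create null0 1000 512
    $rpc bdev_null_create null1 1000 512
    $rpc bdev_wait_for_examine

    # host (/tmp/host.sock): enable bdev_nvme logging and start the discovery poller against 8009
    $rpc -s /tmp/host.sock log_set_flag bdev_nvme
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test

    # the test then polls the host side; once nqn.2016-06.io.spdk:cnode0 is created, given null0 and
    # listed on port 4420, these report "nvme0", "nvme0n1" and "4420" respectively
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | \
        jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs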
00:24:46.067 [2024-04-26 14:07:25.667497] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84530 ] 00:24:46.326 [2024-04-26 14:07:25.850086] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.585 [2024-04-26 14:07:26.086608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.845 14:07:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:46.845 14:07:26 -- common/autotest_common.sh@850 -- # return 0 00:24:46.845 14:07:26 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.845 14:07:26 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:46.845 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.845 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:46.845 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.845 14:07:26 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:46.845 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.845 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@72 -- # notify_id=0 00:24:47.105 14:07:26 -- host/discovery.sh@83 -- # get_subsystem_names 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # sort 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # xargs 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:47.105 14:07:26 -- host/discovery.sh@84 -- # get_bdev_list 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # xargs 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # sort 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:47.105 14:07:26 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@87 -- # get_subsystem_names 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.105 14:07:26 -- host/discovery.sh@59 
-- # sort 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # xargs 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:47.105 14:07:26 -- host/discovery.sh@88 -- # get_bdev_list 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # sort 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # xargs 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:47.105 14:07:26 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@91 -- # get_subsystem_names 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # xargs 00:24:47.105 14:07:26 -- host/discovery.sh@59 -- # sort 00:24:47.105 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.105 14:07:26 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:47.105 14:07:26 -- host/discovery.sh@92 -- # get_bdev_list 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # sort 00:24:47.105 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.105 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.105 14:07:26 -- host/discovery.sh@55 -- # xargs 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:26 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:47.364 14:07:26 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:47.364 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.364 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 [2024-04-26 14:07:26.830468] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:26 -- host/discovery.sh@97 -- # get_subsystem_names 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # sort 00:24:47.364 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.364 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # xargs 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:26 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:47.364 14:07:26 
-- host/discovery.sh@98 -- # get_bdev_list 00:24:47.364 14:07:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:47.364 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.364 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 14:07:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:47.364 14:07:26 -- host/discovery.sh@55 -- # sort 00:24:47.364 14:07:26 -- host/discovery.sh@55 -- # xargs 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:26 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:47.364 14:07:26 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:47.364 14:07:26 -- host/discovery.sh@79 -- # expected_count=0 00:24:47.364 14:07:26 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:47.364 14:07:26 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:47.364 14:07:26 -- common/autotest_common.sh@901 -- # local max=10 00:24:47.364 14:07:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:47.364 14:07:26 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:47.364 14:07:26 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:47.364 14:07:26 -- host/discovery.sh@74 -- # jq '. | length' 00:24:47.364 14:07:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:47.364 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.364 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:26 -- host/discovery.sh@74 -- # notification_count=0 00:24:47.364 14:07:26 -- host/discovery.sh@75 -- # notify_id=0 00:24:47.364 14:07:26 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:47.364 14:07:26 -- common/autotest_common.sh@904 -- # return 0 00:24:47.364 14:07:26 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:47.364 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.364 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:26 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:47.364 14:07:26 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:47.364 14:07:26 -- common/autotest_common.sh@901 -- # local max=10 00:24:47.364 14:07:26 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:47.364 14:07:26 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:47.364 14:07:26 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:47.364 14:07:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.364 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # sort 00:24:47.364 14:07:26 -- host/discovery.sh@59 -- # xargs 00:24:47.364 14:07:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.364 14:07:27 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:24:47.364 14:07:27 -- common/autotest_common.sh@906 -- # sleep 1 00:24:47.932 [2024-04-26 14:07:27.520410] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:47.932 [2024-04-26 14:07:27.520460] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:47.932 [2024-04-26 14:07:27.520491] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:47.932 [2024-04-26 14:07:27.606421] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:48.191 [2024-04-26 14:07:27.670859] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:48.191 [2024-04-26 14:07:27.670907] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:48.458 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.458 14:07:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:48.458 14:07:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:48.458 14:07:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.458 14:07:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.458 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.458 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.458 14:07:28 -- host/discovery.sh@59 -- # sort 00:24:48.458 14:07:28 -- host/discovery.sh@59 -- # xargs 00:24:48.458 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.458 14:07:28 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.458 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.458 14:07:28 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:48.458 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:48.458 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.458 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.458 14:07:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:48.458 14:07:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:48.458 14:07:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.458 14:07:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.458 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.458 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.458 14:07:28 -- host/discovery.sh@55 -- # sort 00:24:48.458 14:07:28 -- host/discovery.sh@55 -- # xargs 00:24:48.458 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.724 14:07:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:48.724 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.724 14:07:28 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:48.724 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:48.724 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.724 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.724 14:07:28 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:48.724 14:07:28 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:48.724 14:07:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:48.724 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.724 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.724 14:07:28 -- host/discovery.sh@63 -- # sort -n 00:24:48.724 14:07:28 -- host/discovery.sh@63 -- # xargs 00:24:48.724 14:07:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:48.724 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:24:48.725 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.725 14:07:28 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:48.725 14:07:28 -- host/discovery.sh@79 -- # expected_count=1 00:24:48.725 14:07:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:48.725 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:48.725 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.725 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:48.725 14:07:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:48.725 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.725 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.725 14:07:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:48.725 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.725 14:07:28 -- host/discovery.sh@74 -- # notification_count=1 00:24:48.725 14:07:28 -- host/discovery.sh@75 -- # notify_id=1 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:48.725 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.725 14:07:28 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:48.725 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.725 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.725 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.725 14:07:28 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.725 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.725 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.725 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:48.725 14:07:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.725 14:07:28 -- host/discovery.sh@55 -- # xargs 00:24:48.725 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.725 14:07:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.725 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.725 14:07:28 -- host/discovery.sh@55 -- # sort 00:24:48.725 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.725 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.725 14:07:28 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:48.725 14:07:28 -- host/discovery.sh@79 -- # expected_count=1 00:24:48.725 14:07:28 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:48.725 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:48.725 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.725 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:48.725 14:07:28 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:48.725 14:07:28 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:48.725 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.725 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.725 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.725 14:07:28 -- host/discovery.sh@74 -- # notification_count=1 00:24:48.725 14:07:28 -- host/discovery.sh@75 -- # notify_id=2 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:48.725 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.725 14:07:28 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:48.725 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.725 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.725 [2024-04-26 14:07:28.350829] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.725 [2024-04-26 14:07:28.351471] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:48.725 [2024-04-26 14:07:28.351515] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.725 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.725 14:07:28 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:48.725 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:48.725 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.725 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:48.725 14:07:28 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:48.725 14:07:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:48.725 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.725 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.725 14:07:28 -- host/discovery.sh@59 -- # xargs 00:24:48.725 14:07:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:48.725 14:07:28 -- host/discovery.sh@59 -- # sort 00:24:48.725 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.984 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.984 14:07:28 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.984 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:48.984 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.984 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:48.984 14:07:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:48.984 14:07:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.984 14:07:28 -- host/discovery.sh@55 -- # sort 00:24:48.984 14:07:28 -- host/discovery.sh@55 -- # xargs 00:24:48.984 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.984 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.984 [2024-04-26 14:07:28.437602] 
bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:24:48.984 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:48.984 14:07:28 -- common/autotest_common.sh@904 -- # return 0 00:24:48.984 14:07:28 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:48.984 14:07:28 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:24:48.984 14:07:28 -- common/autotest_common.sh@901 -- # local max=10 00:24:48.984 14:07:28 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:48.984 14:07:28 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:48.984 14:07:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:48.984 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:24:48.984 14:07:28 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:48.984 14:07:28 -- host/discovery.sh@63 -- # sort -n 00:24:48.984 14:07:28 -- host/discovery.sh@63 -- # xargs 00:24:48.984 14:07:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:48.984 [2024-04-26 14:07:28.500996] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:48.984 [2024-04-26 14:07:28.501037] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:48.984 [2024-04-26 14:07:28.501048] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:48.984 14:07:28 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:24:48.984 14:07:28 -- common/autotest_common.sh@906 -- # sleep 1 00:24:49.921 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:49.921 14:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:24:49.921 14:07:29 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:49.921 14:07:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:49.921 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:49.921 14:07:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:49.921 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 14:07:29 -- host/discovery.sh@63 -- # sort -n 00:24:49.921 14:07:29 -- host/discovery.sh@63 -- # xargs 00:24:49.921 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:49.921 14:07:29 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:49.921 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:49.921 14:07:29 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:24:49.921 14:07:29 -- host/discovery.sh@79 -- # expected_count=0 00:24:49.921 14:07:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:49.921 
14:07:29 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:49.921 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:49.921 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:49.921 14:07:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:49.921 14:07:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:50.180 14:07:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:50.180 14:07:29 -- host/discovery.sh@74 -- # jq '. | length' 00:24:50.180 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.180 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.180 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.180 14:07:29 -- host/discovery.sh@74 -- # notification_count=0 00:24:50.180 14:07:29 -- host/discovery.sh@75 -- # notify_id=2 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:50.180 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.180 14:07:29 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:50.180 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.180 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.180 [2024-04-26 14:07:29.643249] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:50.180 [2024-04-26 14:07:29.643310] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:50.180 [2024-04-26 14:07:29.644814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.180 [2024-04-26 14:07:29.644856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.180 [2024-04-26 14:07:29.644872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.180 [2024-04-26 14:07:29.644885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.180 [2024-04-26 14:07:29.644898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.180 [2024-04-26 14:07:29.644917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.180 [2024-04-26 14:07:29.644929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.180 [2024-04-26 14:07:29.644941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.180 [2024-04-26 14:07:29.644953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.180 14:07:29 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.180 14:07:29 -- common/autotest_common.sh@900 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:50.180 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.180 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:50.180 14:07:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.180 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.180 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.180 14:07:29 -- host/discovery.sh@59 -- # sort 00:24:50.180 14:07:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.180 [2024-04-26 14:07:29.654750] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 14:07:29 -- host/discovery.sh@59 -- # xargs 00:24:50.180 [2024-04-26 14:07:29.664753] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.664894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.664940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.664957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.664976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.665001] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.665019] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.665031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.665044] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.180 [2024-04-26 14:07:29.665063] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.180 [2024-04-26 14:07:29.674818] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.674916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.674958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.674973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.674986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.675005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.675021] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.675032] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.675057] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.180 [2024-04-26 14:07:29.675074] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.180 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.180 [2024-04-26 14:07:29.684867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.684977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.685020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.685035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.685048] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.685066] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.685082] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.685093] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.685105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.180 [2024-04-26 14:07:29.685121] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
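The retry loop driving all of these checks (autotest_common.sh@900-906 in the trace) polls an arbitrary condition up to ten times with a one-second back-off. A rough reconstruction, with the body inferred from the xtrace rather than copied from the script:

    waitforcondition() {
        # Evaluate the caller-supplied condition string once per second,
        # giving up after ten attempts.
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }
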
00:24:50.180 [2024-04-26 14:07:29.694930] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.695028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.695069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.695084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.695098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.695116] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.695132] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.695142] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.695168] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.180 [2024-04-26 14:07:29.695187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.180 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.180 14:07:29 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:50.180 14:07:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:50.180 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.180 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:50.180 [2024-04-26 14:07:29.704980] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.705070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.705111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.705125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.705137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.705168] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.705183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.705194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.705205] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
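get_bdev_list, polled repeatedly while the 4420 path is being torn down, is the same pattern applied to the bdev layer. A sketch based on the host/discovery.sh@55 xtrace (assumed, not copied verbatim from the script):

    get_bdev_list() {
        # Names of all bdevs visible to the host app, e.g. "nvme0n1 nvme0n2".
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
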
00:24:50.180 [2024-04-26 14:07:29.705222] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:50.180 14:07:29 -- host/discovery.sh@55 -- # sort 00:24:50.180 14:07:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.180 14:07:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.180 14:07:29 -- host/discovery.sh@55 -- # xargs 00:24:50.180 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.180 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.180 [2024-04-26 14:07:29.715023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.715123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.715179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.715196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.715208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.715227] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.715252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.715263] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.715275] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.180 [2024-04-26 14:07:29.715291] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:50.180 [2024-04-26 14:07:29.725078] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:50.180 [2024-04-26 14:07:29.725184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.725226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.180 [2024-04-26 14:07:29.725241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:24:50.180 [2024-04-26 14:07:29.725254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:24:50.180 [2024-04-26 14:07:29.725273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:24:50.180 [2024-04-26 14:07:29.725289] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:50.180 [2024-04-26 14:07:29.725300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:50.180 [2024-04-26 14:07:29.725311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:50.180 [2024-04-26 14:07:29.725328] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
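The notification bookkeeping (host/discovery.sh@74-75) fetches bdev events newer than the last seen notify_id and counts them, after which the id cursor advances by that count, consistent with the 1 -> 2 -> 4 progression visible in this log. The advance-by-count step is an inference from the trace, not taken from the script:

    get_notification_count() {
        # Count notifications newer than $notify_id, then move the cursor forward.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
            | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
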
00:24:50.180 [2024-04-26 14:07:29.730861] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:50.180 [2024-04-26 14:07:29.730901] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:50.180 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.180 14:07:29 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:50.180 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.180 14:07:29 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:50.181 14:07:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:24:50.181 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.181 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.181 14:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:24:50.181 14:07:29 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:24:50.181 14:07:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:50.181 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.181 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.181 14:07:29 -- host/discovery.sh@63 -- # sort -n 00:24:50.181 14:07:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:50.181 14:07:29 -- host/discovery.sh@63 -- # xargs 00:24:50.181 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.181 14:07:29 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:24:50.181 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.181 14:07:29 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:24:50.181 14:07:29 -- host/discovery.sh@79 -- # expected_count=0 00:24:50.181 14:07:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.181 14:07:29 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.181 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.181 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.181 14:07:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.181 14:07:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:50.181 14:07:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:50.181 14:07:29 -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:50.181 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.181 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.181 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.439 14:07:29 -- host/discovery.sh@74 -- # notification_count=0 00:24:50.439 14:07:29 -- host/discovery.sh@75 -- # notify_id=2 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:50.439 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.439 14:07:29 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:24:50.439 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.439 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.439 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.439 14:07:29 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:24:50.439 14:07:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:24:50.439 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.439 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:24:50.439 14:07:29 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:50.439 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.439 14:07:29 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:50.439 14:07:29 -- host/discovery.sh@59 -- # xargs 00:24:50.439 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.439 14:07:29 -- host/discovery.sh@59 -- # sort 00:24:50.439 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:24:50.439 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.439 14:07:29 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:24:50.439 14:07:29 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:24:50.439 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.439 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # get_bdev_list 00:24:50.439 14:07:29 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.439 14:07:29 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:50.439 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.439 14:07:29 -- host/discovery.sh@55 -- # xargs 00:24:50.439 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.439 14:07:29 -- host/discovery.sh@55 -- # sort 00:24:50.439 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:24:50.439 14:07:29 -- common/autotest_common.sh@904 -- # return 0 00:24:50.439 14:07:29 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:24:50.439 14:07:29 -- host/discovery.sh@79 -- # expected_count=2 00:24:50.439 14:07:29 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:50.439 14:07:29 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:50.439 14:07:29 -- common/autotest_common.sh@901 -- # local max=10 00:24:50.439 14:07:29 -- common/autotest_common.sh@902 -- # (( max-- )) 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:50.439 14:07:29 -- common/autotest_common.sh@903 -- # get_notification_count 00:24:50.439 14:07:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:50.439 14:07:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.439 14:07:29 -- common/autotest_common.sh@10 -- # set +x 00:24:50.439 14:07:29 -- host/discovery.sh@74 -- # jq '. | length' 00:24:50.439 14:07:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:50.439 14:07:30 -- host/discovery.sh@74 -- # notification_count=2 00:24:50.439 14:07:30 -- host/discovery.sh@75 -- # notify_id=4 00:24:50.440 14:07:30 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:24:50.440 14:07:30 -- common/autotest_common.sh@904 -- # return 0 00:24:50.440 14:07:30 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:50.440 14:07:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:50.440 14:07:30 -- common/autotest_common.sh@10 -- # set +x 00:24:51.396 [2024-04-26 14:07:31.045032] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.396 [2024-04-26 14:07:31.045085] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.396 [2024-04-26 14:07:31.045117] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:51.655 [2024-04-26 14:07:31.132042] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:51.655 [2024-04-26 14:07:31.200786] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:51.655 [2024-04-26 14:07:31.200853] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:51.655 14:07:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.655 14:07:31 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.655 14:07:31 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.655 14:07:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.655 14:07:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:51.655 14:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.655 14:07:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:51.655 14:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.655 14:07:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.655 14:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.655 14:07:31 -- 
common/autotest_common.sh@10 -- # set +x 00:24:51.655 2024/04/26 14:07:31 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:51.655 request: 00:24:51.655 { 00:24:51.655 "method": "bdev_nvme_start_discovery", 00:24:51.656 "params": { 00:24:51.656 "name": "nvme", 00:24:51.656 "trtype": "tcp", 00:24:51.656 "traddr": "10.0.0.2", 00:24:51.656 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:51.656 "adrfam": "ipv4", 00:24:51.656 "trsvcid": "8009", 00:24:51.656 "wait_for_attach": true 00:24:51.656 } 00:24:51.656 } 00:24:51.656 Got JSON-RPC error response 00:24:51.656 GoRPCClient: error on JSON-RPC call 00:24:51.656 14:07:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:51.656 14:07:31 -- common/autotest_common.sh@641 -- # es=1 00:24:51.656 14:07:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:51.656 14:07:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:51.656 14:07:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:51.656 14:07:31 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:51.656 14:07:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:51.656 14:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.656 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:51.656 14:07:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:51.656 14:07:31 -- host/discovery.sh@67 -- # sort 00:24:51.656 14:07:31 -- host/discovery.sh@67 -- # xargs 00:24:51.656 14:07:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.656 14:07:31 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:51.656 14:07:31 -- host/discovery.sh@146 -- # get_bdev_list 00:24:51.656 14:07:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.656 14:07:31 -- host/discovery.sh@55 -- # sort 00:24:51.656 14:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.656 14:07:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.656 14:07:31 -- host/discovery.sh@55 -- # xargs 00:24:51.656 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:51.915 14:07:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.915 14:07:31 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:51.915 14:07:31 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.915 14:07:31 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.915 14:07:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.915 14:07:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:51.915 14:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.915 14:07:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:51.915 14:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.915 14:07:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:51.915 14:07:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.915 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:51.915 2024/04/26 14:07:31 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:24:51.915 request: 00:24:51.915 { 00:24:51.915 "method": "bdev_nvme_start_discovery", 00:24:51.915 "params": { 00:24:51.915 "name": "nvme_second", 00:24:51.915 "trtype": "tcp", 00:24:51.915 "traddr": "10.0.0.2", 00:24:51.915 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:51.915 "adrfam": "ipv4", 00:24:51.915 "trsvcid": "8009", 00:24:51.915 "wait_for_attach": true 00:24:51.915 } 00:24:51.915 } 00:24:51.915 Got JSON-RPC error response 00:24:51.915 GoRPCClient: error on JSON-RPC call 00:24:51.915 14:07:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:51.915 14:07:31 -- common/autotest_common.sh@641 -- # es=1 00:24:51.915 14:07:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:51.915 14:07:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:51.915 14:07:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:51.915 14:07:31 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:51.915 14:07:31 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:51.915 14:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.915 14:07:31 -- host/discovery.sh@67 -- # sort 00:24:51.915 14:07:31 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:51.915 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:51.915 14:07:31 -- host/discovery.sh@67 -- # xargs 00:24:51.915 14:07:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.915 14:07:31 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:51.915 14:07:31 -- host/discovery.sh@152 -- # get_bdev_list 00:24:51.915 14:07:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.915 14:07:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:51.915 14:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.915 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:51.915 14:07:31 -- host/discovery.sh@55 -- # xargs 00:24:51.915 14:07:31 -- host/discovery.sh@55 -- # sort 00:24:51.915 14:07:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:51.915 14:07:31 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:51.915 14:07:31 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:51.915 14:07:31 -- common/autotest_common.sh@638 -- # local es=0 00:24:51.915 14:07:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:51.915 14:07:31 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:51.915 14:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.915 14:07:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:51.915 14:07:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:51.915 14:07:31 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 
-s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:51.915 14:07:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:51.915 14:07:31 -- common/autotest_common.sh@10 -- # set +x 00:24:52.853 [2024-04-26 14:07:32.483317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.853 [2024-04-26 14:07:32.483409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:52.853 [2024-04-26 14:07:32.483430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010640 with addr=10.0.0.2, port=8010 00:24:52.853 [2024-04-26 14:07:32.483495] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:52.853 [2024-04-26 14:07:32.483510] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:52.853 [2024-04-26 14:07:32.483523] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:54.275 [2024-04-26 14:07:33.481757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-04-26 14:07:33.481883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.275 [2024-04-26 14:07:33.481932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010840 with addr=10.0.0.2, port=8010 00:24:54.275 [2024-04-26 14:07:33.482013] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:54.275 [2024-04-26 14:07:33.482034] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:54.275 [2024-04-26 14:07:33.482052] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:54.846 [2024-04-26 14:07:34.479845] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:54.846 2024/04/26 14:07:34 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:24:54.846 request: 00:24:54.846 { 00:24:54.846 "method": "bdev_nvme_start_discovery", 00:24:54.846 "params": { 00:24:54.846 "name": "nvme_second", 00:24:54.846 "trtype": "tcp", 00:24:54.846 "traddr": "10.0.0.2", 00:24:54.846 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:54.846 "adrfam": "ipv4", 00:24:54.846 "trsvcid": "8010", 00:24:54.846 "attach_timeout_ms": 3000 00:24:54.846 } 00:24:54.846 } 00:24:54.846 Got JSON-RPC error response 00:24:54.846 GoRPCClient: error on JSON-RPC call 00:24:54.846 14:07:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:54.846 14:07:34 -- common/autotest_common.sh@641 -- # es=1 00:24:54.846 14:07:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:54.846 14:07:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:54.846 14:07:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:54.846 14:07:34 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:54.846 14:07:34 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:54.846 14:07:34 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:54.846 14:07:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:54.846 14:07:34 -- host/discovery.sh@67 -- # sort 00:24:54.846 14:07:34 -- common/autotest_common.sh@10 -- # set +x 00:24:54.846 14:07:34 -- 
host/discovery.sh@67 -- # xargs 00:24:54.846 14:07:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:55.104 14:07:34 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:55.104 14:07:34 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:55.104 14:07:34 -- host/discovery.sh@161 -- # kill 84530 00:24:55.104 14:07:34 -- host/discovery.sh@162 -- # nvmftestfini 00:24:55.104 14:07:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:55.104 14:07:34 -- nvmf/common.sh@117 -- # sync 00:24:55.104 14:07:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:55.104 14:07:34 -- nvmf/common.sh@120 -- # set +e 00:24:55.104 14:07:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:55.104 14:07:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:55.104 rmmod nvme_tcp 00:24:55.104 rmmod nvme_fabrics 00:24:55.104 rmmod nvme_keyring 00:24:55.104 14:07:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:55.104 14:07:34 -- nvmf/common.sh@124 -- # set -e 00:24:55.104 14:07:34 -- nvmf/common.sh@125 -- # return 0 00:24:55.104 14:07:34 -- nvmf/common.sh@478 -- # '[' -n 84480 ']' 00:24:55.104 14:07:34 -- nvmf/common.sh@479 -- # killprocess 84480 00:24:55.104 14:07:34 -- common/autotest_common.sh@936 -- # '[' -z 84480 ']' 00:24:55.104 14:07:34 -- common/autotest_common.sh@940 -- # kill -0 84480 00:24:55.104 14:07:34 -- common/autotest_common.sh@941 -- # uname 00:24:55.104 14:07:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.104 14:07:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84480 00:24:55.104 14:07:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:55.104 14:07:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:55.104 14:07:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84480' 00:24:55.104 killing process with pid 84480 00:24:55.105 14:07:34 -- common/autotest_common.sh@955 -- # kill 84480 00:24:55.105 14:07:34 -- common/autotest_common.sh@960 -- # wait 84480 00:24:56.485 14:07:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:56.485 14:07:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:56.485 14:07:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:56.485 14:07:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.485 14:07:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.486 14:07:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.486 14:07:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.486 14:07:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.486 14:07:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:56.486 00:24:56.486 real 0m12.200s 00:24:56.486 user 0m22.867s 00:24:56.486 sys 0m2.443s 00:24:56.486 14:07:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:56.486 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:24:56.486 ************************************ 00:24:56.486 END TEST nvmf_discovery 00:24:56.486 ************************************ 00:24:56.486 14:07:36 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:56.486 14:07:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:56.486 14:07:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.486 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:24:56.486 ************************************ 00:24:56.486 START TEST 
nvmf_discovery_remove_ifc 00:24:56.486 ************************************ 00:24:56.486 14:07:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:56.745 * Looking for test storage... 00:24:56.745 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:56.745 14:07:36 -- nvmf/common.sh@7 -- # uname -s 00:24:56.745 14:07:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.745 14:07:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.745 14:07:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.745 14:07:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.745 14:07:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.745 14:07:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.745 14:07:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.745 14:07:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.745 14:07:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.745 14:07:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.745 14:07:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:24:56.745 14:07:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:24:56.745 14:07:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.745 14:07:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.745 14:07:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:56.745 14:07:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.745 14:07:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.745 14:07:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.745 14:07:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.745 14:07:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.745 14:07:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.745 14:07:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.745 14:07:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.745 14:07:36 -- paths/export.sh@5 -- # export PATH 00:24:56.745 14:07:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.745 14:07:36 -- nvmf/common.sh@47 -- # : 0 00:24:56.745 14:07:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.745 14:07:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.745 14:07:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.745 14:07:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.745 14:07:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.745 14:07:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.745 14:07:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.745 14:07:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:56.745 14:07:36 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:56.745 14:07:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:56.745 14:07:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.745 14:07:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:56.745 14:07:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:56.746 14:07:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:56.746 14:07:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.746 14:07:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:56.746 14:07:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.746 14:07:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:56.746 14:07:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:56.746 14:07:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:56.746 14:07:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:56.746 14:07:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:56.746 14:07:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:56.746 14:07:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.746 14:07:36 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.746 14:07:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:56.746 14:07:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:56.746 14:07:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:56.746 14:07:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:56.746 14:07:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:56.746 14:07:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.746 14:07:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:56.746 14:07:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:56.746 14:07:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:56.746 14:07:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:56.746 14:07:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:56.746 14:07:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:56.746 Cannot find device "nvmf_tgt_br" 00:24:56.746 14:07:36 -- nvmf/common.sh@155 -- # true 00:24:56.746 14:07:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:56.746 Cannot find device "nvmf_tgt_br2" 00:24:56.746 14:07:36 -- nvmf/common.sh@156 -- # true 00:24:56.746 14:07:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:56.746 14:07:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:57.005 Cannot find device "nvmf_tgt_br" 00:24:57.005 14:07:36 -- nvmf/common.sh@158 -- # true 00:24:57.005 14:07:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:57.005 Cannot find device "nvmf_tgt_br2" 00:24:57.005 14:07:36 -- nvmf/common.sh@159 -- # true 00:24:57.005 14:07:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:57.005 14:07:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:57.005 14:07:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:57.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.005 14:07:36 -- nvmf/common.sh@162 -- # true 00:24:57.005 14:07:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:57.005 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:57.005 14:07:36 -- nvmf/common.sh@163 -- # true 00:24:57.005 14:07:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:57.005 14:07:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:57.005 14:07:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:57.005 14:07:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:57.005 14:07:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:57.005 14:07:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:57.005 14:07:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:57.005 14:07:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:57.005 14:07:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:57.005 14:07:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:57.005 14:07:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:57.005 14:07:36 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:57.005 14:07:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:57.005 14:07:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:57.005 14:07:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:57.005 14:07:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:57.005 14:07:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:57.005 14:07:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:57.005 14:07:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:57.005 14:07:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:57.005 14:07:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:57.265 14:07:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:57.265 14:07:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:57.265 14:07:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:57.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:24:57.265 00:24:57.265 --- 10.0.0.2 ping statistics --- 00:24:57.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.265 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:57.265 14:07:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:57.265 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:57.265 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:24:57.265 00:24:57.265 --- 10.0.0.3 ping statistics --- 00:24:57.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.265 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:57.265 14:07:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:57.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:24:57.265 00:24:57.265 --- 10.0.0.1 ping statistics --- 00:24:57.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.265 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:24:57.265 14:07:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.265 14:07:36 -- nvmf/common.sh@422 -- # return 0 00:24:57.265 14:07:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:57.265 14:07:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.265 14:07:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:57.265 14:07:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:57.265 14:07:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.265 14:07:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:57.265 14:07:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:57.265 14:07:36 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:57.265 14:07:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:57.265 14:07:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:57.265 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:24:57.265 14:07:36 -- nvmf/common.sh@470 -- # nvmfpid=85025 00:24:57.265 14:07:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:57.265 14:07:36 -- nvmf/common.sh@471 -- # waitforlisten 85025 00:24:57.265 14:07:36 -- common/autotest_common.sh@817 -- # '[' -z 85025 ']' 00:24:57.265 14:07:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.265 14:07:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.265 14:07:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.265 14:07:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.265 14:07:36 -- common/autotest_common.sh@10 -- # set +x 00:24:57.265 [2024-04-26 14:07:36.820350] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:24:57.265 [2024-04-26 14:07:36.820468] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.524 [2024-04-26 14:07:36.993013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.782 [2024-04-26 14:07:37.223751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.782 [2024-04-26 14:07:37.223811] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.782 [2024-04-26 14:07:37.223830] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.782 [2024-04-26 14:07:37.223852] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.782 [2024-04-26 14:07:37.223865] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:57.782 [2024-04-26 14:07:37.223913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.041 14:07:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:58.041 14:07:37 -- common/autotest_common.sh@850 -- # return 0 00:24:58.041 14:07:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:58.041 14:07:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:58.041 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:58.041 14:07:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.041 14:07:37 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:58.041 14:07:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:58.041 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:58.041 [2024-04-26 14:07:37.702232] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.041 [2024-04-26 14:07:37.710374] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:58.301 null0 00:24:58.301 [2024-04-26 14:07:37.742303] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.301 14:07:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:58.301 14:07:37 -- host/discovery_remove_ifc.sh@59 -- # hostpid=85085 00:24:58.301 14:07:37 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 85085 /tmp/host.sock 00:24:58.301 14:07:37 -- common/autotest_common.sh@817 -- # '[' -z 85085 ']' 00:24:58.301 14:07:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:24:58.301 14:07:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:58.301 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:58.301 14:07:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:58.301 14:07:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:58.301 14:07:37 -- common/autotest_common.sh@10 -- # set +x 00:24:58.301 14:07:37 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:58.301 [2024-04-26 14:07:37.857392] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:24:58.301 [2024-04-26 14:07:37.857513] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85085 ] 00:24:58.560 [2024-04-26 14:07:38.028494] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.820 [2024-04-26 14:07:38.263522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.079 14:07:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:59.079 14:07:38 -- common/autotest_common.sh@850 -- # return 0 00:24:59.079 14:07:38 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.079 14:07:38 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:59.079 14:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.079 14:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:59.079 14:07:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.079 14:07:38 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:59.079 14:07:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.079 14:07:38 -- common/autotest_common.sh@10 -- # set +x 00:24:59.647 14:07:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:59.647 14:07:39 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:59.647 14:07:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:59.647 14:07:39 -- common/autotest_common.sh@10 -- # set +x 00:25:00.633 [2024-04-26 14:07:40.072802] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:00.633 [2024-04-26 14:07:40.072859] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:00.633 [2024-04-26 14:07:40.072895] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:00.633 [2024-04-26 14:07:40.158844] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:00.633 [2024-04-26 14:07:40.223420] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:00.633 [2024-04-26 14:07:40.223507] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:00.633 [2024-04-26 14:07:40.223592] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:00.633 [2024-04-26 14:07:40.223618] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:00.633 [2024-04-26 14:07:40.223655] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:00.633 14:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.633 [2024-04-26 14:07:40.231392] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 
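The wait_for_bdev / get_bdev_list helpers exercised from here on are essentially a one-second poll of bdev_get_bdevs on the host RPC socket. A rough equivalent, calling scripts/rpc.py directly instead of the test suite's rpc_cmd wrapper (the retry limit of the real helper is omitted):

    get_bdev_list() {
        # Names of all bdevs currently known to the host app, sorted, on one line.
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list equals the expected string, e.g. "nvme0n1" or "".
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The first wait expects nvme0n1, the namespace attached by discovery; after the target interface is pulled below, the same loop runs with an empty expected list until the bdev disappears.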
00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.633 14:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.633 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.633 14:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.633 14:07:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.633 14:07:40 -- common/autotest_common.sh@10 -- # set +x 00:25:00.633 14:07:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.892 14:07:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:00.892 14:07:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:00.892 14:07:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.830 14:07:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:01.830 14:07:41 -- common/autotest_common.sh@10 -- # set +x 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.830 14:07:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:01.830 14:07:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:02.767 14:07:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.767 14:07:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.767 14:07:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:02.767 14:07:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.767 14:07:42 -- common/autotest_common.sh@10 -- # set +x 00:25:02.767 14:07:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.767 14:07:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.026 14:07:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.026 14:07:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:03.026 14:07:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:03.964 14:07:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@29 -- # jq 
-r '.[].name' 00:25:03.964 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:03.964 14:07:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:03.964 14:07:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.902 14:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:04.902 14:07:44 -- common/autotest_common.sh@10 -- # set +x 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:04.902 14:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:04.902 14:07:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:06.277 14:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:06.277 14:07:45 -- common/autotest_common.sh@10 -- # set +x 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:06.277 14:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:06.277 14:07:45 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:06.277 [2024-04-26 14:07:45.643054] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:06.277 [2024-04-26 14:07:45.643165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.277 [2024-04-26 14:07:45.643187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.277 [2024-04-26 14:07:45.643214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.277 [2024-04-26 14:07:45.643227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.277 [2024-04-26 14:07:45.643242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.277 [2024-04-26 14:07:45.643256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.277 [2024-04-26 14:07:45.643270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.277 [2024-04-26 14:07:45.643283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.277 
[2024-04-26 14:07:45.643298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.277 [2024-04-26 14:07:45.643311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.277 [2024-04-26 14:07:45.643324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:25:06.277 [2024-04-26 14:07:45.653024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:25:06.277 [2024-04-26 14:07:45.663037] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:07.212 14:07:46 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:07.212 14:07:46 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.212 14:07:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:07.212 14:07:46 -- common/autotest_common.sh@10 -- # set +x 00:25:07.212 14:07:46 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:07.212 14:07:46 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:07.212 14:07:46 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:07.212 [2024-04-26 14:07:46.729258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:08.150 [2024-04-26 14:07:47.753266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:08.150 [2024-04-26 14:07:47.753458] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:25:08.150 [2024-04-26 14:07:47.753555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:25:08.150 [2024-04-26 14:07:47.755117] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:25:08.150 [2024-04-26 14:07:47.755292] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
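The connect() failures above (errno 110, connection timed out) are the intended consequence of pulling the target address out from under an established controller; how long the host keeps retrying before declaring the controller lost was fixed when discovery was started. For reference, that invocation against the host RPC socket, with the short timeouts that let this test converge quickly (same flags as in the trace, rpc.py standing in for rpc_cmd):

    # Reconnect every 1 s, fail outstanding I/O after 1 s, and give up on the
    # controller after 2 s without a successful reconnect.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

Once the controller-loss timeout expires the reset is abandoned, the nvme0n1 bdev is deleted, and the bdev list polled above drains to empty.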
00:25:08.150 [2024-04-26 14:07:47.755394] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:08.150 [2024-04-26 14:07:47.755514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.150 [2024-04-26 14:07:47.755581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.150 [2024-04-26 14:07:47.755641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.150 [2024-04-26 14:07:47.755686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.150 [2024-04-26 14:07:47.755733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.150 [2024-04-26 14:07:47.755778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.150 [2024-04-26 14:07:47.755825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.150 [2024-04-26 14:07:47.755869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.150 [2024-04-26 14:07:47.755916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:08.150 [2024-04-26 14:07:47.755960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.150 [2024-04-26 14:07:47.756004] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
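With the reset abandoned and the stale controller being torn down, the test flips the link back to verify that discovery re-attaches the subsystem as a fresh controller with a new namespace, nvme1n1. In outline this is the removal step inverted, followed by another poll (wait_for_bdev as sketched earlier):

    # Restore the target address inside the namespace, then wait for re-discovery.
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1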
00:25:08.151 [2024-04-26 14:07:47.756068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:25:08.151 [2024-04-26 14:07:47.756400] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:08.151 [2024-04-26 14:07:47.756475] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:08.151 14:07:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:08.151 14:07:47 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:08.151 14:07:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.528 14:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.528 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.528 14:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:09.528 14:07:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:09.528 14:07:48 -- common/autotest_common.sh@10 -- # set +x 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:09.528 14:07:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:09.528 14:07:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:10.096 [2024-04-26 14:07:49.758658] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:10.096 [2024-04-26 14:07:49.758716] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:10.096 [2024-04-26 14:07:49.758752] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.354 [2024-04-26 14:07:49.844698] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:10.354 [2024-04-26 14:07:49.901000] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:10.354 [2024-04-26 14:07:49.901086] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:10.354 [2024-04-26 14:07:49.901179] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:10.354 [2024-04-26 14:07:49.901206] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme1 done 00:25:10.354 [2024-04-26 14:07:49.901224] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:10.354 [2024-04-26 14:07:49.907654] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.354 14:07:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:10.354 14:07:49 -- common/autotest_common.sh@10 -- # set +x 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:10.354 14:07:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:10.354 14:07:49 -- host/discovery_remove_ifc.sh@90 -- # killprocess 85085 00:25:10.354 14:07:49 -- common/autotest_common.sh@936 -- # '[' -z 85085 ']' 00:25:10.354 14:07:49 -- common/autotest_common.sh@940 -- # kill -0 85085 00:25:10.354 14:07:49 -- common/autotest_common.sh@941 -- # uname 00:25:10.354 14:07:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:10.354 14:07:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85085 00:25:10.354 killing process with pid 85085 00:25:10.354 14:07:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:10.354 14:07:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:10.354 14:07:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85085' 00:25:10.354 14:07:50 -- common/autotest_common.sh@955 -- # kill 85085 00:25:10.354 14:07:50 -- common/autotest_common.sh@960 -- # wait 85085 00:25:11.730 14:07:51 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:11.730 14:07:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:11.730 14:07:51 -- nvmf/common.sh@117 -- # sync 00:25:11.730 14:07:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.730 14:07:51 -- nvmf/common.sh@120 -- # set +e 00:25:11.730 14:07:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.730 14:07:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.730 rmmod nvme_tcp 00:25:11.730 rmmod nvme_fabrics 00:25:11.730 rmmod nvme_keyring 00:25:11.730 14:07:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.730 14:07:51 -- nvmf/common.sh@124 -- # set -e 00:25:11.730 14:07:51 -- nvmf/common.sh@125 -- # return 0 00:25:11.730 14:07:51 -- nvmf/common.sh@478 -- # '[' -n 85025 ']' 00:25:11.730 14:07:51 -- nvmf/common.sh@479 -- # killprocess 85025 00:25:11.730 14:07:51 -- common/autotest_common.sh@936 -- # '[' -z 85025 ']' 00:25:11.730 14:07:51 -- common/autotest_common.sh@940 -- # kill -0 85025 00:25:11.730 14:07:51 -- common/autotest_common.sh@941 -- # uname 00:25:11.730 14:07:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:11.730 14:07:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85025 00:25:11.988 killing process with pid 85025 00:25:11.988 14:07:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:11.988 14:07:51 -- common/autotest_common.sh@946 -- # '[' 
reactor_1 = sudo ']' 00:25:11.988 14:07:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85025' 00:25:11.988 14:07:51 -- common/autotest_common.sh@955 -- # kill 85025 00:25:11.988 14:07:51 -- common/autotest_common.sh@960 -- # wait 85025 00:25:13.390 14:07:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:13.390 14:07:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:13.390 14:07:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:13.390 14:07:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.390 14:07:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:13.390 14:07:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.390 14:07:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.390 14:07:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.390 14:07:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:13.390 00:25:13.390 real 0m16.589s 00:25:13.390 user 0m27.157s 00:25:13.390 sys 0m2.275s 00:25:13.390 14:07:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:13.390 ************************************ 00:25:13.390 END TEST nvmf_discovery_remove_ifc 00:25:13.390 ************************************ 00:25:13.390 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:25:13.390 14:07:52 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:13.390 14:07:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:13.390 14:07:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:13.390 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:25:13.390 ************************************ 00:25:13.390 START TEST nvmf_identify_kernel_target 00:25:13.390 ************************************ 00:25:13.390 14:07:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:13.390 * Looking for test storage... 
00:25:13.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:13.390 14:07:52 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.390 14:07:52 -- nvmf/common.sh@7 -- # uname -s 00:25:13.390 14:07:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.390 14:07:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.390 14:07:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.390 14:07:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.390 14:07:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.390 14:07:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.390 14:07:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.390 14:07:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.390 14:07:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.390 14:07:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.390 14:07:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:25:13.390 14:07:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:25:13.390 14:07:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.390 14:07:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.390 14:07:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.390 14:07:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.390 14:07:53 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.390 14:07:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.390 14:07:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.390 14:07:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.391 14:07:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.391 14:07:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.391 14:07:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.391 14:07:53 -- paths/export.sh@5 -- # export PATH 00:25:13.391 14:07:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.391 14:07:53 -- nvmf/common.sh@47 -- # : 0 00:25:13.391 14:07:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.391 14:07:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.391 14:07:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.391 14:07:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.391 14:07:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.391 14:07:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.391 14:07:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.391 14:07:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.391 14:07:53 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:13.391 14:07:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:13.391 14:07:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.391 14:07:53 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:13.391 14:07:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:13.391 14:07:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:13.391 14:07:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.391 14:07:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.391 14:07:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.391 14:07:53 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:13.391 14:07:53 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:13.391 14:07:53 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:13.391 14:07:53 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:13.391 14:07:53 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:13.391 14:07:53 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:13.391 14:07:53 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.391 14:07:53 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.391 14:07:53 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:13.391 14:07:53 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:13.391 14:07:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:13.391 14:07:53 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:13.391 14:07:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:13.391 14:07:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:13.391 14:07:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:13.391 14:07:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:13.391 14:07:53 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:13.391 14:07:53 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:13.391 14:07:53 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:13.391 14:07:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:13.391 Cannot find device "nvmf_tgt_br" 00:25:13.391 14:07:53 -- nvmf/common.sh@155 -- # true 00:25:13.391 14:07:53 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:13.650 Cannot find device "nvmf_tgt_br2" 00:25:13.650 14:07:53 -- nvmf/common.sh@156 -- # true 00:25:13.650 14:07:53 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:13.650 14:07:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:13.650 Cannot find device "nvmf_tgt_br" 00:25:13.650 14:07:53 -- nvmf/common.sh@158 -- # true 00:25:13.650 14:07:53 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:13.650 Cannot find device "nvmf_tgt_br2" 00:25:13.650 14:07:53 -- nvmf/common.sh@159 -- # true 00:25:13.650 14:07:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:13.650 14:07:53 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:13.650 14:07:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:13.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.650 14:07:53 -- nvmf/common.sh@162 -- # true 00:25:13.650 14:07:53 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:13.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:13.650 14:07:53 -- nvmf/common.sh@163 -- # true 00:25:13.650 14:07:53 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:13.650 14:07:53 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:13.650 14:07:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:13.650 14:07:53 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:13.650 14:07:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:13.650 14:07:53 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:13.650 14:07:53 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:13.650 14:07:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:13.650 14:07:53 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:13.650 14:07:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:13.650 14:07:53 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:13.650 14:07:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:13.650 14:07:53 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:13.650 14:07:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:13.650 14:07:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:13.650 14:07:53 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:13.650 14:07:53 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:13.909 14:07:53 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:13.909 14:07:53 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:13.909 14:07:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:13.909 14:07:53 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:13.909 14:07:53 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:13.909 14:07:53 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:13.909 14:07:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:13.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:25:13.909 00:25:13.909 --- 10.0.0.2 ping statistics --- 00:25:13.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.909 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:25:13.909 14:07:53 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:13.909 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:13.909 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:25:13.909 00:25:13.909 --- 10.0.0.3 ping statistics --- 00:25:13.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.909 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:25:13.909 14:07:53 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:13.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:13.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:25:13.909 00:25:13.909 --- 10.0.0.1 ping statistics --- 00:25:13.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.909 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:25:13.909 14:07:53 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.909 14:07:53 -- nvmf/common.sh@422 -- # return 0 00:25:13.909 14:07:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:13.909 14:07:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.909 14:07:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.909 14:07:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:13.909 14:07:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:13.909 14:07:53 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:13.909 14:07:53 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:13.909 14:07:53 -- nvmf/common.sh@717 -- # local ip 00:25:13.909 14:07:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:13.909 14:07:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:13.909 14:07:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:13.909 14:07:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:13.909 14:07:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:13.909 14:07:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:13.909 14:07:53 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:13.909 14:07:53 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:13.909 14:07:53 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:13.909 14:07:53 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:13.909 14:07:53 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:13.909 14:07:53 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:13.909 14:07:53 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:13.909 14:07:53 -- nvmf/common.sh@628 -- # local block nvme 00:25:13.909 14:07:53 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:13.909 14:07:53 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:13.909 14:07:53 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:14.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.476 Waiting for block devices as requested 00:25:14.476 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.742 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.742 14:07:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:14.742 14:07:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:14.743 14:07:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:25:14.743 14:07:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:14.743 14:07:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:14.743 14:07:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.743 14:07:54 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:25:14.743 14:07:54 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:14.743 14:07:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:14.743 No valid GPT data, bailing 00:25:14.743 14:07:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:14.743 14:07:54 -- scripts/common.sh@391 -- # pt= 00:25:14.743 14:07:54 -- scripts/common.sh@392 -- # return 1 00:25:14.743 14:07:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:25:14.743 14:07:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:14.743 14:07:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:14.743 14:07:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:25:14.743 14:07:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:14.743 14:07:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:14.743 14:07:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:14.743 14:07:54 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:25:14.743 14:07:54 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:14.743 14:07:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:14.743 No valid GPT data, bailing 00:25:14.743 14:07:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:15.004 14:07:54 -- scripts/common.sh@391 -- # pt= 00:25:15.004 14:07:54 -- scripts/common.sh@392 -- # return 1 00:25:15.004 14:07:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:25:15.004 14:07:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:15.004 14:07:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:15.004 14:07:54 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:25:15.004 14:07:54 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:15.004 14:07:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:15.004 14:07:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:15.004 14:07:54 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:25:15.004 14:07:54 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:15.004 14:07:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:15.004 No valid GPT data, bailing 00:25:15.004 14:07:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:15.004 14:07:54 -- scripts/common.sh@391 -- # pt= 00:25:15.004 14:07:54 -- scripts/common.sh@392 -- # return 1 00:25:15.004 14:07:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:25:15.004 14:07:54 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:15.004 14:07:54 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:15.004 14:07:54 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:25:15.004 14:07:54 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:15.004 14:07:54 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:15.004 14:07:54 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:15.004 14:07:54 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:25:15.004 14:07:54 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:15.004 14:07:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:15.004 No valid GPT data, bailing 00:25:15.004 14:07:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:15.004 14:07:54 -- scripts/common.sh@391 -- # pt= 00:25:15.004 14:07:54 -- scripts/common.sh@392 -- # return 1 00:25:15.004 14:07:54 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:25:15.004 14:07:54 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:25:15.004 14:07:54 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.004 14:07:54 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.004 14:07:54 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:15.004 14:07:54 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:15.004 14:07:54 -- nvmf/common.sh@656 -- # echo 1 00:25:15.004 14:07:54 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:25:15.004 14:07:54 -- nvmf/common.sh@658 -- # echo 1 00:25:15.004 14:07:54 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:25:15.004 14:07:54 -- nvmf/common.sh@661 -- # echo tcp 00:25:15.004 14:07:54 -- nvmf/common.sh@662 -- # echo 4420 00:25:15.004 14:07:54 -- nvmf/common.sh@663 -- # echo ipv4 00:25:15.004 14:07:54 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:15.004 14:07:54 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -a 10.0.0.1 -t tcp -s 4420 00:25:15.004 00:25:15.004 Discovery Log Number of Records 2, Generation counter 2 00:25:15.004 =====Discovery Log Entry 0====== 00:25:15.004 trtype: tcp 00:25:15.004 adrfam: ipv4 00:25:15.004 subtype: current discovery subsystem 00:25:15.004 treq: not specified, sq flow control disable supported 00:25:15.004 portid: 1 00:25:15.004 trsvcid: 4420 00:25:15.004 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:15.004 traddr: 10.0.0.1 00:25:15.004 eflags: none 00:25:15.004 sectype: none 00:25:15.004 =====Discovery Log Entry 1====== 00:25:15.004 trtype: tcp 00:25:15.004 adrfam: ipv4 00:25:15.004 subtype: nvme subsystem 00:25:15.004 treq: not specified, sq flow control disable supported 00:25:15.004 portid: 1 00:25:15.004 trsvcid: 4420 00:25:15.004 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:15.004 traddr: 10.0.0.1 00:25:15.004 eflags: none 00:25:15.004 sectype: none 00:25:15.004 14:07:54 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:15.004 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:15.264 ===================================================== 00:25:15.264 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:15.264 ===================================================== 00:25:15.264 Controller Capabilities/Features 00:25:15.264 ================================ 00:25:15.264 Vendor ID: 0000 00:25:15.264 Subsystem Vendor ID: 0000 00:25:15.264 Serial Number: 949e65c1b121a3035395 00:25:15.264 Model Number: Linux 00:25:15.264 Firmware Version: 6.7.0-68 00:25:15.264 Recommended Arb Burst: 0 00:25:15.264 IEEE OUI Identifier: 00 00 00 00:25:15.264 Multi-path I/O 00:25:15.264 May have multiple subsystem ports: No 00:25:15.264 May have multiple controllers: No 00:25:15.264 Associated with SR-IOV VF: No 00:25:15.264 Max Data Transfer Size: Unlimited 00:25:15.264 Max Number of Namespaces: 0 00:25:15.264 Max Number of I/O Queues: 1024 00:25:15.264 NVMe Specification Version (VS): 1.3 00:25:15.264 NVMe Specification Version (Identify): 1.3 00:25:15.264 Maximum Queue Entries: 1024 00:25:15.264 Contiguous Queues Required: No 00:25:15.264 Arbitration Mechanisms Supported 00:25:15.264 Weighted Round Robin: Not Supported 00:25:15.264 Vendor Specific: Not Supported 00:25:15.264 Reset Timeout: 7500 ms 00:25:15.264 Doorbell Stride: 4 bytes 00:25:15.264 NVM Subsystem Reset: Not Supported 00:25:15.264 Command Sets Supported 00:25:15.264 NVM Command Set: Supported 00:25:15.264 Boot Partition: Not Supported 00:25:15.264 Memory Page Size Minimum: 4096 bytes 00:25:15.264 Memory Page Size Maximum: 4096 bytes 00:25:15.264 Persistent Memory Region: Not Supported 00:25:15.264 Optional Asynchronous Events Supported 00:25:15.264 Namespace Attribute Notices: Not Supported 00:25:15.264 Firmware Activation Notices: Not Supported 00:25:15.264 ANA Change Notices: Not Supported 00:25:15.264 PLE Aggregate Log Change Notices: Not Supported 00:25:15.264 LBA Status Info Alert Notices: Not Supported 00:25:15.264 EGE Aggregate Log Change Notices: Not Supported 00:25:15.264 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.264 Zone Descriptor Change Notices: Not Supported 00:25:15.264 Discovery Log Change Notices: Supported 00:25:15.264 Controller Attributes 00:25:15.264 128-bit Host Identifier: Not Supported 00:25:15.264 Non-Operational Permissive Mode: Not Supported 00:25:15.264 NVM Sets: Not Supported 00:25:15.264 Read Recovery Levels: Not Supported 00:25:15.264 Endurance Groups: Not Supported 00:25:15.264 Predictable Latency Mode: Not Supported 00:25:15.264 Traffic Based Keep ALive: Not Supported 00:25:15.264 Namespace Granularity: Not Supported 00:25:15.264 SQ Associations: Not Supported 00:25:15.264 UUID List: Not Supported 00:25:15.264 Multi-Domain Subsystem: Not Supported 00:25:15.264 Fixed Capacity Management: Not Supported 
00:25:15.264 Variable Capacity Management: Not Supported 00:25:15.264 Delete Endurance Group: Not Supported 00:25:15.264 Delete NVM Set: Not Supported 00:25:15.264 Extended LBA Formats Supported: Not Supported 00:25:15.264 Flexible Data Placement Supported: Not Supported 00:25:15.264 00:25:15.264 Controller Memory Buffer Support 00:25:15.264 ================================ 00:25:15.264 Supported: No 00:25:15.264 00:25:15.264 Persistent Memory Region Support 00:25:15.264 ================================ 00:25:15.264 Supported: No 00:25:15.264 00:25:15.264 Admin Command Set Attributes 00:25:15.264 ============================ 00:25:15.264 Security Send/Receive: Not Supported 00:25:15.264 Format NVM: Not Supported 00:25:15.264 Firmware Activate/Download: Not Supported 00:25:15.264 Namespace Management: Not Supported 00:25:15.264 Device Self-Test: Not Supported 00:25:15.264 Directives: Not Supported 00:25:15.264 NVMe-MI: Not Supported 00:25:15.264 Virtualization Management: Not Supported 00:25:15.264 Doorbell Buffer Config: Not Supported 00:25:15.264 Get LBA Status Capability: Not Supported 00:25:15.264 Command & Feature Lockdown Capability: Not Supported 00:25:15.264 Abort Command Limit: 1 00:25:15.264 Async Event Request Limit: 1 00:25:15.264 Number of Firmware Slots: N/A 00:25:15.264 Firmware Slot 1 Read-Only: N/A 00:25:15.264 Firmware Activation Without Reset: N/A 00:25:15.264 Multiple Update Detection Support: N/A 00:25:15.264 Firmware Update Granularity: No Information Provided 00:25:15.264 Per-Namespace SMART Log: No 00:25:15.264 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.264 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:15.264 Command Effects Log Page: Not Supported 00:25:15.264 Get Log Page Extended Data: Supported 00:25:15.264 Telemetry Log Pages: Not Supported 00:25:15.264 Persistent Event Log Pages: Not Supported 00:25:15.264 Supported Log Pages Log Page: May Support 00:25:15.264 Commands Supported & Effects Log Page: Not Supported 00:25:15.264 Feature Identifiers & Effects Log Page:May Support 00:25:15.264 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.264 Data Area 4 for Telemetry Log: Not Supported 00:25:15.264 Error Log Page Entries Supported: 1 00:25:15.264 Keep Alive: Not Supported 00:25:15.264 00:25:15.264 NVM Command Set Attributes 00:25:15.264 ========================== 00:25:15.264 Submission Queue Entry Size 00:25:15.264 Max: 1 00:25:15.264 Min: 1 00:25:15.264 Completion Queue Entry Size 00:25:15.264 Max: 1 00:25:15.264 Min: 1 00:25:15.264 Number of Namespaces: 0 00:25:15.264 Compare Command: Not Supported 00:25:15.264 Write Uncorrectable Command: Not Supported 00:25:15.264 Dataset Management Command: Not Supported 00:25:15.264 Write Zeroes Command: Not Supported 00:25:15.264 Set Features Save Field: Not Supported 00:25:15.264 Reservations: Not Supported 00:25:15.264 Timestamp: Not Supported 00:25:15.264 Copy: Not Supported 00:25:15.264 Volatile Write Cache: Not Present 00:25:15.264 Atomic Write Unit (Normal): 1 00:25:15.264 Atomic Write Unit (PFail): 1 00:25:15.264 Atomic Compare & Write Unit: 1 00:25:15.264 Fused Compare & Write: Not Supported 00:25:15.264 Scatter-Gather List 00:25:15.264 SGL Command Set: Supported 00:25:15.264 SGL Keyed: Not Supported 00:25:15.264 SGL Bit Bucket Descriptor: Not Supported 00:25:15.264 SGL Metadata Pointer: Not Supported 00:25:15.264 Oversized SGL: Not Supported 00:25:15.264 SGL Metadata Address: Not Supported 00:25:15.264 SGL Offset: Supported 00:25:15.264 Transport SGL Data Block: Not 
Supported 00:25:15.264 Replay Protected Memory Block: Not Supported 00:25:15.264 00:25:15.264 Firmware Slot Information 00:25:15.264 ========================= 00:25:15.264 Active slot: 0 00:25:15.264 00:25:15.264 00:25:15.264 Error Log 00:25:15.264 ========= 00:25:15.264 00:25:15.264 Active Namespaces 00:25:15.264 ================= 00:25:15.264 Discovery Log Page 00:25:15.264 ================== 00:25:15.264 Generation Counter: 2 00:25:15.264 Number of Records: 2 00:25:15.264 Record Format: 0 00:25:15.264 00:25:15.264 Discovery Log Entry 0 00:25:15.264 ---------------------- 00:25:15.264 Transport Type: 3 (TCP) 00:25:15.264 Address Family: 1 (IPv4) 00:25:15.264 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:15.264 Entry Flags: 00:25:15.265 Duplicate Returned Information: 0 00:25:15.265 Explicit Persistent Connection Support for Discovery: 0 00:25:15.265 Transport Requirements: 00:25:15.265 Secure Channel: Not Specified 00:25:15.265 Port ID: 1 (0x0001) 00:25:15.265 Controller ID: 65535 (0xffff) 00:25:15.265 Admin Max SQ Size: 32 00:25:15.265 Transport Service Identifier: 4420 00:25:15.265 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:15.265 Transport Address: 10.0.0.1 00:25:15.265 Discovery Log Entry 1 00:25:15.265 ---------------------- 00:25:15.265 Transport Type: 3 (TCP) 00:25:15.265 Address Family: 1 (IPv4) 00:25:15.265 Subsystem Type: 2 (NVM Subsystem) 00:25:15.265 Entry Flags: 00:25:15.265 Duplicate Returned Information: 0 00:25:15.265 Explicit Persistent Connection Support for Discovery: 0 00:25:15.265 Transport Requirements: 00:25:15.265 Secure Channel: Not Specified 00:25:15.265 Port ID: 1 (0x0001) 00:25:15.265 Controller ID: 65535 (0xffff) 00:25:15.265 Admin Max SQ Size: 32 00:25:15.265 Transport Service Identifier: 4420 00:25:15.265 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:15.265 Transport Address: 10.0.0.1 00:25:15.265 14:07:54 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:15.524 get_feature(0x01) failed 00:25:15.524 get_feature(0x02) failed 00:25:15.524 get_feature(0x04) failed 00:25:15.524 ===================================================== 00:25:15.524 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:15.524 ===================================================== 00:25:15.524 Controller Capabilities/Features 00:25:15.524 ================================ 00:25:15.524 Vendor ID: 0000 00:25:15.524 Subsystem Vendor ID: 0000 00:25:15.524 Serial Number: bb2b377c940aa0ab3dd4 00:25:15.524 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:15.524 Firmware Version: 6.7.0-68 00:25:15.524 Recommended Arb Burst: 6 00:25:15.524 IEEE OUI Identifier: 00 00 00 00:25:15.524 Multi-path I/O 00:25:15.524 May have multiple subsystem ports: Yes 00:25:15.524 May have multiple controllers: Yes 00:25:15.524 Associated with SR-IOV VF: No 00:25:15.524 Max Data Transfer Size: Unlimited 00:25:15.524 Max Number of Namespaces: 1024 00:25:15.524 Max Number of I/O Queues: 128 00:25:15.524 NVMe Specification Version (VS): 1.3 00:25:15.524 NVMe Specification Version (Identify): 1.3 00:25:15.524 Maximum Queue Entries: 1024 00:25:15.525 Contiguous Queues Required: No 00:25:15.525 Arbitration Mechanisms Supported 00:25:15.525 Weighted Round Robin: Not Supported 00:25:15.525 Vendor Specific: Not Supported 00:25:15.525 Reset Timeout: 7500 ms 00:25:15.525 Doorbell Stride: 4 bytes 
00:25:15.525 NVM Subsystem Reset: Not Supported 00:25:15.525 Command Sets Supported 00:25:15.525 NVM Command Set: Supported 00:25:15.525 Boot Partition: Not Supported 00:25:15.525 Memory Page Size Minimum: 4096 bytes 00:25:15.525 Memory Page Size Maximum: 4096 bytes 00:25:15.525 Persistent Memory Region: Not Supported 00:25:15.525 Optional Asynchronous Events Supported 00:25:15.525 Namespace Attribute Notices: Supported 00:25:15.525 Firmware Activation Notices: Not Supported 00:25:15.525 ANA Change Notices: Supported 00:25:15.525 PLE Aggregate Log Change Notices: Not Supported 00:25:15.525 LBA Status Info Alert Notices: Not Supported 00:25:15.525 EGE Aggregate Log Change Notices: Not Supported 00:25:15.525 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.525 Zone Descriptor Change Notices: Not Supported 00:25:15.525 Discovery Log Change Notices: Not Supported 00:25:15.525 Controller Attributes 00:25:15.525 128-bit Host Identifier: Supported 00:25:15.525 Non-Operational Permissive Mode: Not Supported 00:25:15.525 NVM Sets: Not Supported 00:25:15.525 Read Recovery Levels: Not Supported 00:25:15.525 Endurance Groups: Not Supported 00:25:15.525 Predictable Latency Mode: Not Supported 00:25:15.525 Traffic Based Keep ALive: Supported 00:25:15.525 Namespace Granularity: Not Supported 00:25:15.525 SQ Associations: Not Supported 00:25:15.525 UUID List: Not Supported 00:25:15.525 Multi-Domain Subsystem: Not Supported 00:25:15.525 Fixed Capacity Management: Not Supported 00:25:15.525 Variable Capacity Management: Not Supported 00:25:15.525 Delete Endurance Group: Not Supported 00:25:15.525 Delete NVM Set: Not Supported 00:25:15.525 Extended LBA Formats Supported: Not Supported 00:25:15.525 Flexible Data Placement Supported: Not Supported 00:25:15.525 00:25:15.525 Controller Memory Buffer Support 00:25:15.525 ================================ 00:25:15.525 Supported: No 00:25:15.525 00:25:15.525 Persistent Memory Region Support 00:25:15.525 ================================ 00:25:15.525 Supported: No 00:25:15.525 00:25:15.525 Admin Command Set Attributes 00:25:15.525 ============================ 00:25:15.525 Security Send/Receive: Not Supported 00:25:15.525 Format NVM: Not Supported 00:25:15.525 Firmware Activate/Download: Not Supported 00:25:15.525 Namespace Management: Not Supported 00:25:15.525 Device Self-Test: Not Supported 00:25:15.525 Directives: Not Supported 00:25:15.525 NVMe-MI: Not Supported 00:25:15.525 Virtualization Management: Not Supported 00:25:15.525 Doorbell Buffer Config: Not Supported 00:25:15.525 Get LBA Status Capability: Not Supported 00:25:15.525 Command & Feature Lockdown Capability: Not Supported 00:25:15.525 Abort Command Limit: 4 00:25:15.525 Async Event Request Limit: 4 00:25:15.525 Number of Firmware Slots: N/A 00:25:15.525 Firmware Slot 1 Read-Only: N/A 00:25:15.525 Firmware Activation Without Reset: N/A 00:25:15.525 Multiple Update Detection Support: N/A 00:25:15.525 Firmware Update Granularity: No Information Provided 00:25:15.525 Per-Namespace SMART Log: Yes 00:25:15.525 Asymmetric Namespace Access Log Page: Supported 00:25:15.525 ANA Transition Time : 10 sec 00:25:15.525 00:25:15.525 Asymmetric Namespace Access Capabilities 00:25:15.525 ANA Optimized State : Supported 00:25:15.525 ANA Non-Optimized State : Supported 00:25:15.525 ANA Inaccessible State : Supported 00:25:15.525 ANA Persistent Loss State : Supported 00:25:15.525 ANA Change State : Supported 00:25:15.525 ANAGRPID is not changed : No 00:25:15.525 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:25:15.525 00:25:15.525 ANA Group Identifier Maximum : 128 00:25:15.525 Number of ANA Group Identifiers : 128 00:25:15.525 Max Number of Allowed Namespaces : 1024 00:25:15.525 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:15.525 Command Effects Log Page: Supported 00:25:15.525 Get Log Page Extended Data: Supported 00:25:15.525 Telemetry Log Pages: Not Supported 00:25:15.525 Persistent Event Log Pages: Not Supported 00:25:15.525 Supported Log Pages Log Page: May Support 00:25:15.525 Commands Supported & Effects Log Page: Not Supported 00:25:15.525 Feature Identifiers & Effects Log Page:May Support 00:25:15.525 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.525 Data Area 4 for Telemetry Log: Not Supported 00:25:15.525 Error Log Page Entries Supported: 128 00:25:15.525 Keep Alive: Supported 00:25:15.525 Keep Alive Granularity: 1000 ms 00:25:15.525 00:25:15.525 NVM Command Set Attributes 00:25:15.525 ========================== 00:25:15.525 Submission Queue Entry Size 00:25:15.525 Max: 64 00:25:15.525 Min: 64 00:25:15.525 Completion Queue Entry Size 00:25:15.525 Max: 16 00:25:15.525 Min: 16 00:25:15.525 Number of Namespaces: 1024 00:25:15.525 Compare Command: Not Supported 00:25:15.525 Write Uncorrectable Command: Not Supported 00:25:15.525 Dataset Management Command: Supported 00:25:15.525 Write Zeroes Command: Supported 00:25:15.525 Set Features Save Field: Not Supported 00:25:15.525 Reservations: Not Supported 00:25:15.525 Timestamp: Not Supported 00:25:15.525 Copy: Not Supported 00:25:15.525 Volatile Write Cache: Present 00:25:15.525 Atomic Write Unit (Normal): 1 00:25:15.525 Atomic Write Unit (PFail): 1 00:25:15.525 Atomic Compare & Write Unit: 1 00:25:15.525 Fused Compare & Write: Not Supported 00:25:15.525 Scatter-Gather List 00:25:15.525 SGL Command Set: Supported 00:25:15.525 SGL Keyed: Not Supported 00:25:15.525 SGL Bit Bucket Descriptor: Not Supported 00:25:15.525 SGL Metadata Pointer: Not Supported 00:25:15.525 Oversized SGL: Not Supported 00:25:15.525 SGL Metadata Address: Not Supported 00:25:15.525 SGL Offset: Supported 00:25:15.525 Transport SGL Data Block: Not Supported 00:25:15.525 Replay Protected Memory Block: Not Supported 00:25:15.525 00:25:15.525 Firmware Slot Information 00:25:15.525 ========================= 00:25:15.525 Active slot: 0 00:25:15.525 00:25:15.525 Asymmetric Namespace Access 00:25:15.525 =========================== 00:25:15.525 Change Count : 0 00:25:15.525 Number of ANA Group Descriptors : 1 00:25:15.525 ANA Group Descriptor : 0 00:25:15.525 ANA Group ID : 1 00:25:15.525 Number of NSID Values : 1 00:25:15.525 Change Count : 0 00:25:15.525 ANA State : 1 00:25:15.525 Namespace Identifier : 1 00:25:15.525 00:25:15.525 Commands Supported and Effects 00:25:15.525 ============================== 00:25:15.525 Admin Commands 00:25:15.525 -------------- 00:25:15.525 Get Log Page (02h): Supported 00:25:15.525 Identify (06h): Supported 00:25:15.525 Abort (08h): Supported 00:25:15.525 Set Features (09h): Supported 00:25:15.525 Get Features (0Ah): Supported 00:25:15.525 Asynchronous Event Request (0Ch): Supported 00:25:15.525 Keep Alive (18h): Supported 00:25:15.525 I/O Commands 00:25:15.525 ------------ 00:25:15.525 Flush (00h): Supported 00:25:15.525 Write (01h): Supported LBA-Change 00:25:15.525 Read (02h): Supported 00:25:15.525 Write Zeroes (08h): Supported LBA-Change 00:25:15.525 Dataset Management (09h): Supported 00:25:15.525 00:25:15.525 Error Log 00:25:15.525 ========= 00:25:15.525 Entry: 0 00:25:15.525 Error Count: 0x3 00:25:15.525 Submission 
Queue Id: 0x0 00:25:15.525 Command Id: 0x5 00:25:15.525 Phase Bit: 0 00:25:15.525 Status Code: 0x2 00:25:15.525 Status Code Type: 0x0 00:25:15.525 Do Not Retry: 1 00:25:15.525 Error Location: 0x28 00:25:15.525 LBA: 0x0 00:25:15.525 Namespace: 0x0 00:25:15.525 Vendor Log Page: 0x0 00:25:15.525 ----------- 00:25:15.525 Entry: 1 00:25:15.525 Error Count: 0x2 00:25:15.525 Submission Queue Id: 0x0 00:25:15.525 Command Id: 0x5 00:25:15.525 Phase Bit: 0 00:25:15.525 Status Code: 0x2 00:25:15.525 Status Code Type: 0x0 00:25:15.525 Do Not Retry: 1 00:25:15.525 Error Location: 0x28 00:25:15.525 LBA: 0x0 00:25:15.525 Namespace: 0x0 00:25:15.525 Vendor Log Page: 0x0 00:25:15.525 ----------- 00:25:15.525 Entry: 2 00:25:15.525 Error Count: 0x1 00:25:15.525 Submission Queue Id: 0x0 00:25:15.525 Command Id: 0x4 00:25:15.525 Phase Bit: 0 00:25:15.525 Status Code: 0x2 00:25:15.525 Status Code Type: 0x0 00:25:15.525 Do Not Retry: 1 00:25:15.525 Error Location: 0x28 00:25:15.525 LBA: 0x0 00:25:15.525 Namespace: 0x0 00:25:15.525 Vendor Log Page: 0x0 00:25:15.525 00:25:15.525 Number of Queues 00:25:15.525 ================ 00:25:15.525 Number of I/O Submission Queues: 128 00:25:15.525 Number of I/O Completion Queues: 128 00:25:15.525 00:25:15.525 ZNS Specific Controller Data 00:25:15.525 ============================ 00:25:15.525 Zone Append Size Limit: 0 00:25:15.525 00:25:15.525 00:25:15.525 Active Namespaces 00:25:15.525 ================= 00:25:15.525 get_feature(0x05) failed 00:25:15.525 Namespace ID:1 00:25:15.525 Command Set Identifier: NVM (00h) 00:25:15.525 Deallocate: Supported 00:25:15.525 Deallocated/Unwritten Error: Not Supported 00:25:15.525 Deallocated Read Value: Unknown 00:25:15.525 Deallocate in Write Zeroes: Not Supported 00:25:15.525 Deallocated Guard Field: 0xFFFF 00:25:15.525 Flush: Supported 00:25:15.525 Reservation: Not Supported 00:25:15.525 Namespace Sharing Capabilities: Multiple Controllers 00:25:15.525 Size (in LBAs): 1310720 (5GiB) 00:25:15.525 Capacity (in LBAs): 1310720 (5GiB) 00:25:15.525 Utilization (in LBAs): 1310720 (5GiB) 00:25:15.525 UUID: 82f90b5a-9e7f-4591-ad32-468eefbfeb8d 00:25:15.525 Thin Provisioning: Not Supported 00:25:15.525 Per-NS Atomic Units: Yes 00:25:15.525 Atomic Boundary Size (Normal): 0 00:25:15.525 Atomic Boundary Size (PFail): 0 00:25:15.525 Atomic Boundary Offset: 0 00:25:15.525 NGUID/EUI64 Never Reused: No 00:25:15.525 ANA group ID: 1 00:25:15.525 Namespace Write Protected: No 00:25:15.525 Number of LBA Formats: 1 00:25:15.525 Current LBA Format: LBA Format #00 00:25:15.525 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:25:15.525 00:25:15.525 14:07:55 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:15.525 14:07:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:15.525 14:07:55 -- nvmf/common.sh@117 -- # sync 00:25:15.784 14:07:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.784 14:07:55 -- nvmf/common.sh@120 -- # set +e 00:25:15.784 14:07:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.784 14:07:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.784 rmmod nvme_tcp 00:25:15.784 rmmod nvme_fabrics 00:25:15.784 14:07:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.784 14:07:55 -- nvmf/common.sh@124 -- # set -e 00:25:15.784 14:07:55 -- nvmf/common.sh@125 -- # return 0 00:25:15.784 14:07:55 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:25:15.784 14:07:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:15.784 14:07:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:15.784 14:07:55 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:15.784 14:07:55 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.784 14:07:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.784 14:07:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.784 14:07:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.784 14:07:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.784 14:07:55 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:15.784 14:07:55 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:15.784 14:07:55 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:15.784 14:07:55 -- nvmf/common.sh@675 -- # echo 0 00:25:15.784 14:07:55 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.784 14:07:55 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:15.784 14:07:55 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:15.784 14:07:55 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:15.784 14:07:55 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:15.784 14:07:55 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:15.784 14:07:55 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:16.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:16.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:16.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:16.722 00:25:16.722 real 0m3.495s 00:25:16.722 user 0m1.129s 00:25:16.722 sys 0m1.943s 00:25:16.722 14:07:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:16.722 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:16.722 ************************************ 00:25:16.722 END TEST nvmf_identify_kernel_target 00:25:16.722 ************************************ 00:25:16.981 14:07:56 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:16.981 14:07:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:16.981 14:07:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:16.981 14:07:56 -- common/autotest_common.sh@10 -- # set +x 00:25:16.981 ************************************ 00:25:16.981 START TEST nvmf_auth 00:25:16.981 ************************************ 00:25:16.981 14:07:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:16.981 * Looking for test storage... 
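The clean_kernel_target teardown traced above reduces to roughly the following configfs sequence (a sketch reconstructed from the xtrace lines; xtrace does not show redirect targets, so the namespace enable attribute on the first echo is an assumption):

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # assumed target of the bare 'echo 0' in the trace
  rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
  rmdir "$nvmet/subsystems/$nqn/namespaces/1"
  rmdir "$nvmet/ports/1"
  rmdir "$nvmet/subsystems/$nqn"
  modprobe -r nvmet_tcp nvmet                             # unload the kernel target modules

The removal order matters: the port-to-subsystem symlink and the namespace directory have to go before the port and subsystem directories can be rmdir'd.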
00:25:17.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:17.241 14:07:56 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:17.241 14:07:56 -- nvmf/common.sh@7 -- # uname -s 00:25:17.241 14:07:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.241 14:07:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.241 14:07:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.241 14:07:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.241 14:07:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.241 14:07:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.241 14:07:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.241 14:07:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.241 14:07:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.241 14:07:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.241 14:07:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:25:17.241 14:07:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:25:17.241 14:07:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.241 14:07:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.241 14:07:56 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:17.241 14:07:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.241 14:07:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:17.241 14:07:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.241 14:07:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.241 14:07:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.241 14:07:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.241 14:07:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.241 14:07:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.241 14:07:56 -- paths/export.sh@5 -- # export PATH 00:25:17.242 14:07:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.242 14:07:56 -- nvmf/common.sh@47 -- # : 0 00:25:17.242 14:07:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.242 14:07:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.242 14:07:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.242 14:07:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.242 14:07:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.242 14:07:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.242 14:07:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.242 14:07:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.242 14:07:56 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:17.242 14:07:56 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:17.242 14:07:56 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:17.242 14:07:56 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:17.242 14:07:56 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:17.242 14:07:56 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:17.242 14:07:56 -- host/auth.sh@21 -- # keys=() 00:25:17.242 14:07:56 -- host/auth.sh@77 -- # nvmftestinit 00:25:17.242 14:07:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:17.242 14:07:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.242 14:07:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:17.242 14:07:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:17.242 14:07:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:17.242 14:07:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.242 14:07:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:17.242 14:07:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.242 14:07:56 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:17.242 14:07:56 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:17.242 14:07:56 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:17.242 14:07:56 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:17.242 14:07:56 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:17.242 14:07:56 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:17.242 14:07:56 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.242 14:07:56 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.242 14:07:56 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:17.242 14:07:56 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:17.242 14:07:56 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:17.242 14:07:56 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:17.242 14:07:56 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:17.242 14:07:56 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.242 14:07:56 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:17.242 14:07:56 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:17.242 14:07:56 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:17.242 14:07:56 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:17.242 14:07:56 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:17.242 14:07:56 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:17.242 Cannot find device "nvmf_tgt_br" 00:25:17.242 14:07:56 -- nvmf/common.sh@155 -- # true 00:25:17.242 14:07:56 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:17.242 Cannot find device "nvmf_tgt_br2" 00:25:17.242 14:07:56 -- nvmf/common.sh@156 -- # true 00:25:17.242 14:07:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:17.242 14:07:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:17.242 Cannot find device "nvmf_tgt_br" 00:25:17.242 14:07:56 -- nvmf/common.sh@158 -- # true 00:25:17.242 14:07:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:17.242 Cannot find device "nvmf_tgt_br2" 00:25:17.242 14:07:56 -- nvmf/common.sh@159 -- # true 00:25:17.242 14:07:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:17.242 14:07:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:17.242 14:07:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:17.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:17.242 14:07:56 -- nvmf/common.sh@162 -- # true 00:25:17.242 14:07:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:17.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:17.242 14:07:56 -- nvmf/common.sh@163 -- # true 00:25:17.242 14:07:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:17.242 14:07:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:17.242 14:07:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:17.242 14:07:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:17.501 14:07:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:17.501 14:07:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:17.501 14:07:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:17.501 14:07:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:17.501 14:07:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:17.501 14:07:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:17.502 14:07:57 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:17.502 14:07:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:17.502 14:07:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:17.502 14:07:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:17.502 14:07:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:17.502 14:07:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:17.502 14:07:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:17.502 14:07:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:17.502 14:07:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:17.502 14:07:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:17.502 14:07:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:17.502 14:07:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:17.502 14:07:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:17.502 14:07:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:17.502 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:17.502 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:25:17.502 00:25:17.502 --- 10.0.0.2 ping statistics --- 00:25:17.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.502 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:25:17.502 14:07:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:17.502 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:17.502 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:25:17.502 00:25:17.502 --- 10.0.0.3 ping statistics --- 00:25:17.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.502 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:17.502 14:07:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:17.502 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:17.502 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:17.502 00:25:17.502 --- 10.0.0.1 ping statistics --- 00:25:17.502 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.502 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:17.502 14:07:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.502 14:07:57 -- nvmf/common.sh@422 -- # return 0 00:25:17.502 14:07:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:17.502 14:07:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.502 14:07:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:17.502 14:07:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:17.502 14:07:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.502 14:07:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:17.502 14:07:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:17.502 14:07:57 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:25:17.502 14:07:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:17.502 14:07:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:17.502 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:17.761 14:07:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:17.761 14:07:57 -- nvmf/common.sh@470 -- # nvmfpid=86026 00:25:17.761 14:07:57 -- nvmf/common.sh@471 -- # waitforlisten 86026 00:25:17.761 14:07:57 -- common/autotest_common.sh@817 -- # '[' -z 86026 ']' 00:25:17.761 14:07:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.761 14:07:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:17.761 14:07:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
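For reference, the veth/netns topology that nvmf_veth_init builds in the trace above condenses to the sketch below. Interface names, addresses and iptables rules are taken from the trace; the loop and '&&' grouping are only a compaction of the traced one-command-per-line sequence:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target-side pair 1
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target-side pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # NVMF_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP reach port 4420
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # connectivity checks, as in the log

The three bridge-side veth ends all join nvmf_br, so the initiator (10.0.0.1 in the root namespace) and the target interfaces (10.0.0.2/10.0.0.3 inside nvmf_tgt_ns_spdk) sit on one L2 segment, which is why the ping checks above succeed.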
00:25:17.761 14:07:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:17.761 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:25:18.699 14:07:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:18.700 14:07:58 -- common/autotest_common.sh@850 -- # return 0 00:25:18.700 14:07:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:18.700 14:07:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:18.700 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:18.700 14:07:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.700 14:07:58 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:18.700 14:07:58 -- host/auth.sh@81 -- # gen_key null 32 00:25:18.700 14:07:58 -- host/auth.sh@53 -- # local digest len file key 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # local -A digests 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # digest=null 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # len=32 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # key=4b6c6f51cadeb4c2a5915a809e9f78a5 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.8q7 00:25:18.700 14:07:58 -- host/auth.sh@59 -- # format_dhchap_key 4b6c6f51cadeb4c2a5915a809e9f78a5 0 00:25:18.700 14:07:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 4b6c6f51cadeb4c2a5915a809e9f78a5 0 00:25:18.700 14:07:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # key=4b6c6f51cadeb4c2a5915a809e9f78a5 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # digest=0 00:25:18.700 14:07:58 -- nvmf/common.sh@694 -- # python - 00:25:18.700 14:07:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.8q7 00:25:18.700 14:07:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.8q7 00:25:18.700 14:07:58 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.8q7 00:25:18.700 14:07:58 -- host/auth.sh@82 -- # gen_key null 48 00:25:18.700 14:07:58 -- host/auth.sh@53 -- # local digest len file key 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # local -A digests 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # digest=null 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # len=48 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # key=1e4a6a246c9279ff95fb471513e133ba66e337d1e590aac4 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.6DU 00:25:18.700 14:07:58 -- host/auth.sh@59 -- # format_dhchap_key 1e4a6a246c9279ff95fb471513e133ba66e337d1e590aac4 0 00:25:18.700 14:07:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 1e4a6a246c9279ff95fb471513e133ba66e337d1e590aac4 0 00:25:18.700 14:07:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # key=1e4a6a246c9279ff95fb471513e133ba66e337d1e590aac4 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # digest=0 00:25:18.700 
14:07:58 -- nvmf/common.sh@694 -- # python - 00:25:18.700 14:07:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.6DU 00:25:18.700 14:07:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.6DU 00:25:18.700 14:07:58 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.6DU 00:25:18.700 14:07:58 -- host/auth.sh@83 -- # gen_key sha256 32 00:25:18.700 14:07:58 -- host/auth.sh@53 -- # local digest len file key 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # local -A digests 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # digest=sha256 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # len=32 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # key=600683d07700dd3a192a533f16f4a7d8 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.ITx 00:25:18.700 14:07:58 -- host/auth.sh@59 -- # format_dhchap_key 600683d07700dd3a192a533f16f4a7d8 1 00:25:18.700 14:07:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 600683d07700dd3a192a533f16f4a7d8 1 00:25:18.700 14:07:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # key=600683d07700dd3a192a533f16f4a7d8 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # digest=1 00:25:18.700 14:07:58 -- nvmf/common.sh@694 -- # python - 00:25:18.700 14:07:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.ITx 00:25:18.700 14:07:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.ITx 00:25:18.700 14:07:58 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.ITx 00:25:18.700 14:07:58 -- host/auth.sh@84 -- # gen_key sha384 48 00:25:18.700 14:07:58 -- host/auth.sh@53 -- # local digest len file key 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:18.700 14:07:58 -- host/auth.sh@54 -- # local -A digests 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # digest=sha384 00:25:18.700 14:07:58 -- host/auth.sh@56 -- # len=48 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:18.700 14:07:58 -- host/auth.sh@57 -- # key=756a02da7e0abb76a6d785ccf796bda452493458dc3dc3ed 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:25:18.700 14:07:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.1Z4 00:25:18.700 14:07:58 -- host/auth.sh@59 -- # format_dhchap_key 756a02da7e0abb76a6d785ccf796bda452493458dc3dc3ed 2 00:25:18.700 14:07:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 756a02da7e0abb76a6d785ccf796bda452493458dc3dc3ed 2 00:25:18.700 14:07:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # key=756a02da7e0abb76a6d785ccf796bda452493458dc3dc3ed 00:25:18.700 14:07:58 -- nvmf/common.sh@693 -- # digest=2 00:25:18.700 14:07:58 -- nvmf/common.sh@694 -- # python - 00:25:18.959 14:07:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.1Z4 00:25:18.959 14:07:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.1Z4 00:25:18.959 14:07:58 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.1Z4 00:25:18.959 14:07:58 -- host/auth.sh@85 -- # gen_key sha512 64 00:25:18.959 14:07:58 -- host/auth.sh@53 -- # local digest len file key 00:25:18.959 14:07:58 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:18.959 14:07:58 -- host/auth.sh@54 -- # local -A digests 00:25:18.959 14:07:58 -- host/auth.sh@56 -- # digest=sha512 00:25:18.959 14:07:58 -- host/auth.sh@56 -- # len=64 00:25:18.959 14:07:58 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:18.959 14:07:58 -- host/auth.sh@57 -- # key=c790e3627a97d396de28c40b95ebf7228eba09949c0730719679a3b0236a47d9 00:25:18.959 14:07:58 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:25:18.959 14:07:58 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.bzj 00:25:18.960 14:07:58 -- host/auth.sh@59 -- # format_dhchap_key c790e3627a97d396de28c40b95ebf7228eba09949c0730719679a3b0236a47d9 3 00:25:18.960 14:07:58 -- nvmf/common.sh@708 -- # format_key DHHC-1 c790e3627a97d396de28c40b95ebf7228eba09949c0730719679a3b0236a47d9 3 00:25:18.960 14:07:58 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:18.960 14:07:58 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:25:18.960 14:07:58 -- nvmf/common.sh@693 -- # key=c790e3627a97d396de28c40b95ebf7228eba09949c0730719679a3b0236a47d9 00:25:18.960 14:07:58 -- nvmf/common.sh@693 -- # digest=3 00:25:18.960 14:07:58 -- nvmf/common.sh@694 -- # python - 00:25:18.960 14:07:58 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.bzj 00:25:18.960 14:07:58 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.bzj 00:25:18.960 14:07:58 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.bzj 00:25:18.960 14:07:58 -- host/auth.sh@87 -- # waitforlisten 86026 00:25:18.960 14:07:58 -- common/autotest_common.sh@817 -- # '[' -z 86026 ']' 00:25:18.960 14:07:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.960 14:07:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:18.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.960 14:07:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
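One gen_key invocation from the trace above, written out as a standalone sketch. The digest indices come from the traced digests array (null=0, sha256=1, sha384=2, sha512=3); format_dhchap_key is the nvmf/common.sh helper whose inline 'python -' performs the DHHC-1 encoding (not reproduced here), and the redirect into the temp file is an assumption since xtrace does not show it:

  len=32                                              # 'gen_key null 32' in the trace
  key=$(xxd -p -c0 -l "$((len / 2))" /dev/urandom)    # 16 random bytes -> 32 hex characters
  file=$(mktemp -t spdk.key-null.XXX)                 # /tmp/spdk.key-null.8q7 in this run
  format_dhchap_key "$key" 0 > "$file"                # writes the DHHC-1:00:...: form; redirect assumed
  chmod 0600 "$file"                                  # keys are kept readable by the owner only
  keys[0]=$file

The longer keys (len=48, len=64) differ only in how many bytes are read from /dev/urandom and in the digest index passed to format_dhchap_key.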
00:25:18.960 14:07:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:18.960 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.219 14:07:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:19.219 14:07:58 -- common/autotest_common.sh@850 -- # return 0 00:25:19.219 14:07:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:19.219 14:07:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8q7 00:25:19.219 14:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.219 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.219 14:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.219 14:07:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:19.219 14:07:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.6DU 00:25:19.219 14:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.219 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.219 14:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.219 14:07:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:19.219 14:07:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ITx 00:25:19.219 14:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.219 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.219 14:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.219 14:07:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:19.219 14:07:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1Z4 00:25:19.219 14:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.219 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.219 14:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.219 14:07:58 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:25:19.219 14:07:58 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bzj 00:25:19.219 14:07:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:19.219 14:07:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.219 14:07:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:19.219 14:07:58 -- host/auth.sh@92 -- # nvmet_auth_init 00:25:19.219 14:07:58 -- host/auth.sh@35 -- # get_main_ns_ip 00:25:19.219 14:07:58 -- nvmf/common.sh@717 -- # local ip 00:25:19.219 14:07:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:19.219 14:07:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:19.219 14:07:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:19.219 14:07:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:19.219 14:07:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:19.219 14:07:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:19.219 14:07:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:19.219 14:07:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:19.219 14:07:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:19.219 14:07:58 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:19.219 14:07:58 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:19.219 14:07:58 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:25:19.219 14:07:58 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:19.219 14:07:58 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:19.219 14:07:58 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:19.219 14:07:58 -- nvmf/common.sh@628 -- # local block nvme 00:25:19.219 14:07:58 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:25:19.219 14:07:58 -- nvmf/common.sh@631 -- # modprobe nvmet 00:25:19.219 14:07:58 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:19.219 14:07:58 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:19.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:19.787 Waiting for block devices as requested 00:25:19.787 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:19.787 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:20.724 14:08:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:20.724 14:08:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:20.724 14:08:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:25:20.724 14:08:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:25:20.724 14:08:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:20.724 14:08:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:20.724 14:08:00 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:25:20.724 14:08:00 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:20.724 14:08:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:25:20.724 No valid GPT data, bailing 00:25:20.724 14:08:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:20.724 14:08:00 -- scripts/common.sh@391 -- # pt= 00:25:20.724 14:08:00 -- scripts/common.sh@392 -- # return 1 00:25:20.724 14:08:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:25:20.724 14:08:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:20.724 14:08:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:25:20.724 14:08:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:25:20.724 14:08:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:25:20.724 14:08:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:25:20.724 14:08:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:20.724 14:08:00 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:25:20.724 14:08:00 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:25:20.724 14:08:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:25:20.724 No valid GPT data, bailing 00:25:20.724 14:08:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:25:20.724 14:08:00 -- scripts/common.sh@391 -- # pt= 00:25:20.724 14:08:00 -- scripts/common.sh@392 -- # return 1 00:25:20.724 14:08:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:25:20.724 14:08:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:20.724 14:08:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:25:20.724 14:08:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:25:20.724 14:08:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:25:20.724 14:08:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:25:20.724 14:08:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:20.724 14:08:00 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:25:20.724 14:08:00 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:25:20.724 14:08:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:25:20.724 No valid GPT data, bailing 00:25:20.724 14:08:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:25:20.984 14:08:00 -- scripts/common.sh@391 -- # pt= 00:25:20.984 14:08:00 -- scripts/common.sh@392 -- # return 1 00:25:20.984 14:08:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:25:20.984 14:08:00 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:25:20.984 14:08:00 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:25:20.984 14:08:00 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:25:20.984 14:08:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:25:20.984 14:08:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:25:20.984 14:08:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:25:20.984 14:08:00 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:25:20.984 14:08:00 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:25:20.984 14:08:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:25:20.984 No valid GPT data, bailing 00:25:20.984 14:08:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:25:20.984 14:08:00 -- scripts/common.sh@391 -- # pt= 00:25:20.984 14:08:00 -- scripts/common.sh@392 -- # return 1 00:25:20.984 14:08:00 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:25:20.984 14:08:00 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:25:20.984 14:08:00 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.984 14:08:00 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:20.984 14:08:00 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:20.984 14:08:00 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:20.984 14:08:00 -- nvmf/common.sh@656 -- # echo 1 00:25:20.984 14:08:00 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:25:20.984 14:08:00 -- nvmf/common.sh@658 -- # echo 1 00:25:20.984 14:08:00 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:25:20.984 14:08:00 -- nvmf/common.sh@661 -- # echo tcp 00:25:20.984 14:08:00 -- nvmf/common.sh@662 -- # echo 4420 00:25:20.984 14:08:00 -- nvmf/common.sh@663 -- # echo ipv4 00:25:20.984 14:08:00 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:20.984 14:08:00 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -a 10.0.0.1 -t tcp -s 4420 00:25:20.984 00:25:20.984 Discovery Log Number of Records 2, Generation counter 2 00:25:20.984 =====Discovery Log Entry 0====== 00:25:20.984 trtype: tcp 00:25:20.984 adrfam: ipv4 00:25:20.984 subtype: current discovery subsystem 00:25:20.984 treq: not specified, sq flow control disable supported 00:25:20.984 portid: 1 00:25:20.984 trsvcid: 4420 00:25:20.984 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:20.984 traddr: 10.0.0.1 00:25:20.984 eflags: none 00:25:20.984 sectype: none 00:25:20.984 =====Discovery Log Entry 1====== 00:25:20.984 trtype: tcp 00:25:20.984 adrfam: ipv4 00:25:20.984 subtype: nvme subsystem 00:25:20.984 treq: not specified, sq flow control disable supported 
00:25:20.984 portid: 1 00:25:20.984 trsvcid: 4420 00:25:20.984 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:20.984 traddr: 10.0.0.1 00:25:20.984 eflags: none 00:25:20.984 sectype: none 00:25:20.984 14:08:00 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:20.984 14:08:00 -- host/auth.sh@37 -- # echo 0 00:25:20.984 14:08:00 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:20.984 14:08:00 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:20.984 14:08:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:20.984 14:08:00 -- host/auth.sh@44 -- # digest=sha256 00:25:20.984 14:08:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.984 14:08:00 -- host/auth.sh@44 -- # keyid=1 00:25:20.984 14:08:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:20.984 14:08:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:20.984 14:08:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:21.244 14:08:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:21.244 14:08:00 -- host/auth.sh@100 -- # IFS=, 00:25:21.244 14:08:00 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:25:21.244 14:08:00 -- host/auth.sh@100 -- # IFS=, 00:25:21.244 14:08:00 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.244 14:08:00 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:21.244 14:08:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.244 14:08:00 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:25:21.244 14:08:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.244 14:08:00 -- host/auth.sh@68 -- # keyid=1 00:25:21.244 14:08:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:21.244 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.244 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.244 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.244 14:08:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.244 14:08:00 -- nvmf/common.sh@717 -- # local ip 00:25:21.244 14:08:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.244 14:08:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.244 14:08:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.244 14:08:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.244 14:08:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.244 14:08:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.244 14:08:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.244 14:08:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.244 14:08:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.244 14:08:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:21.244 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.244 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.244 
nvme0n1 00:25:21.244 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.244 14:08:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.244 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.244 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.244 14:08:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.244 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.244 14:08:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.244 14:08:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.244 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.244 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.244 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.244 14:08:00 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:21.244 14:08:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:21.244 14:08:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.244 14:08:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:21.244 14:08:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.244 14:08:00 -- host/auth.sh@44 -- # digest=sha256 00:25:21.244 14:08:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.244 14:08:00 -- host/auth.sh@44 -- # keyid=0 00:25:21.244 14:08:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:21.244 14:08:00 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:21.245 14:08:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:21.245 14:08:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:21.245 14:08:00 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:25:21.245 14:08:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.245 14:08:00 -- host/auth.sh@68 -- # digest=sha256 00:25:21.245 14:08:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:21.245 14:08:00 -- host/auth.sh@68 -- # keyid=0 00:25:21.245 14:08:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.245 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.245 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.245 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.245 14:08:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.245 14:08:00 -- nvmf/common.sh@717 -- # local ip 00:25:21.245 14:08:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.245 14:08:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.245 14:08:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.245 14:08:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.245 14:08:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.245 14:08:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.245 14:08:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.245 14:08:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.245 14:08:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.245 14:08:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:21.245 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.245 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.504 nvme0n1 
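One connect_authenticate round from the trace above, written out as explicit RPC calls. This is a sketch: rpc_cmd in the autotest harness is assumed to wrap scripts/rpc.py against the target's default RPC socket, and the matching target-side secret was already written into the host's nvmet configfs dhchap attributes by nvmet_auth_set_key (exact attribute paths are not shown by xtrace). NQNs, addresses and key files are the ones visible in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" keyring_file_add_key key1 /tmp/spdk.key-null.6DU        # register the DHHC-1 key file
  "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
  "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'            # expect "nvme0" if authentication succeeded
  "$rpc" bdev_nvme_detach_controller nvme0                       # tear down before the next digest/dhgroup combination

The surrounding loops in auth.sh simply repeat this attach/verify/detach cycle for every digest, DH group and key index, which is why the same nvme0n1 / get_controllers / detach_controller pattern keeps recurring in the log below.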
00:25:21.504 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.504 14:08:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.504 14:08:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.504 14:08:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.504 14:08:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.504 14:08:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.504 14:08:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.504 14:08:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.504 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.504 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.504 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.504 14:08:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.504 14:08:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:21.504 14:08:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.504 14:08:01 -- host/auth.sh@44 -- # digest=sha256 00:25:21.504 14:08:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.504 14:08:01 -- host/auth.sh@44 -- # keyid=1 00:25:21.504 14:08:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:21.504 14:08:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:21.504 14:08:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:21.504 14:08:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:21.504 14:08:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:25:21.504 14:08:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.504 14:08:01 -- host/auth.sh@68 -- # digest=sha256 00:25:21.504 14:08:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:21.504 14:08:01 -- host/auth.sh@68 -- # keyid=1 00:25:21.504 14:08:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.504 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.504 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.504 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.504 14:08:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.504 14:08:01 -- nvmf/common.sh@717 -- # local ip 00:25:21.504 14:08:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.504 14:08:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.504 14:08:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.504 14:08:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.504 14:08:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.504 14:08:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.504 14:08:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.504 14:08:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.504 14:08:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.504 14:08:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:21.504 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.504 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.504 nvme0n1 00:25:21.505 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.505 14:08:01 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:21.505 14:08:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:21.505 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.505 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.763 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.763 14:08:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.763 14:08:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.763 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.763 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.763 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.763 14:08:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.763 14:08:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:21.763 14:08:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.763 14:08:01 -- host/auth.sh@44 -- # digest=sha256 00:25:21.764 14:08:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.764 14:08:01 -- host/auth.sh@44 -- # keyid=2 00:25:21.764 14:08:01 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:21.764 14:08:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:21.764 14:08:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:21.764 14:08:01 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:21.764 14:08:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:25:21.764 14:08:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.764 14:08:01 -- host/auth.sh@68 -- # digest=sha256 00:25:21.764 14:08:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:21.764 14:08:01 -- host/auth.sh@68 -- # keyid=2 00:25:21.764 14:08:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.764 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.764 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.764 14:08:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.764 14:08:01 -- nvmf/common.sh@717 -- # local ip 00:25:21.764 14:08:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:21.764 14:08:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:21.764 14:08:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:21.764 14:08:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:21.764 14:08:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:21.764 14:08:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:21.764 14:08:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:21.764 14:08:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:21.764 14:08:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:21.764 14:08:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:21.764 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.764 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 nvme0n1 00:25:21.764 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.764 14:08:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:21.764 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.764 14:08:01 -- host/auth.sh@73 -- # jq -r 
'.[].name' 00:25:21.764 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.764 14:08:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:21.764 14:08:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:21.764 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.764 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.764 14:08:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:21.764 14:08:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:21.764 14:08:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:21.764 14:08:01 -- host/auth.sh@44 -- # digest=sha256 00:25:21.764 14:08:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:21.764 14:08:01 -- host/auth.sh@44 -- # keyid=3 00:25:21.764 14:08:01 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:21.764 14:08:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:21.764 14:08:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:21.764 14:08:01 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:21.764 14:08:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:25:21.764 14:08:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:21.764 14:08:01 -- host/auth.sh@68 -- # digest=sha256 00:25:21.764 14:08:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:21.764 14:08:01 -- host/auth.sh@68 -- # keyid=3 00:25:21.764 14:08:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:21.764 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.764 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.764 14:08:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:21.764 14:08:01 -- nvmf/common.sh@717 -- # local ip 00:25:22.023 14:08:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.023 14:08:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.023 14:08:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.023 14:08:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.023 14:08:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.023 14:08:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.023 14:08:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.023 14:08:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.023 14:08:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.023 14:08:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:22.023 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.023 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.023 nvme0n1 00:25:22.023 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.023 14:08:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.023 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.023 14:08:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.023 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.023 14:08:01 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.023 14:08:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.023 14:08:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.023 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.023 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.023 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.023 14:08:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.023 14:08:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:22.023 14:08:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.023 14:08:01 -- host/auth.sh@44 -- # digest=sha256 00:25:22.023 14:08:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:22.023 14:08:01 -- host/auth.sh@44 -- # keyid=4 00:25:22.023 14:08:01 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:22.023 14:08:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.023 14:08:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:22.023 14:08:01 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:22.023 14:08:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:25:22.023 14:08:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.023 14:08:01 -- host/auth.sh@68 -- # digest=sha256 00:25:22.023 14:08:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:22.023 14:08:01 -- host/auth.sh@68 -- # keyid=4 00:25:22.023 14:08:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:22.023 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.023 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.023 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.023 14:08:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.023 14:08:01 -- nvmf/common.sh@717 -- # local ip 00:25:22.023 14:08:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.023 14:08:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.023 14:08:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.023 14:08:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.023 14:08:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.023 14:08:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.023 14:08:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.023 14:08:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.023 14:08:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.023 14:08:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:22.023 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.023 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.282 nvme0n1 00:25:22.282 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.282 14:08:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.282 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.282 14:08:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.282 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.282 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.282 14:08:01 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.282 14:08:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.282 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.282 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.282 14:08:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.282 14:08:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:22.282 14:08:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.282 14:08:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:22.282 14:08:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.282 14:08:01 -- host/auth.sh@44 -- # digest=sha256 00:25:22.282 14:08:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.282 14:08:01 -- host/auth.sh@44 -- # keyid=0 00:25:22.282 14:08:01 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:22.282 14:08:01 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.282 14:08:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:22.540 14:08:01 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:22.540 14:08:01 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:25:22.540 14:08:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.540 14:08:01 -- host/auth.sh@68 -- # digest=sha256 00:25:22.540 14:08:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:22.540 14:08:01 -- host/auth.sh@68 -- # keyid=0 00:25:22.540 14:08:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.540 14:08:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.540 14:08:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.540 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.540 14:08:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.540 14:08:02 -- nvmf/common.sh@717 -- # local ip 00:25:22.540 14:08:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.540 14:08:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.540 14:08:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.540 14:08:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.540 14:08:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.540 14:08:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.540 14:08:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.540 14:08:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.540 14:08:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.540 14:08:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:22.540 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.540 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.540 nvme0n1 00:25:22.540 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.540 14:08:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.540 14:08:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.540 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.540 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.540 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.540 14:08:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.540 14:08:02 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.540 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.540 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.540 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.540 14:08:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.540 14:08:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:22.540 14:08:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.540 14:08:02 -- host/auth.sh@44 -- # digest=sha256 00:25:22.540 14:08:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.540 14:08:02 -- host/auth.sh@44 -- # keyid=1 00:25:22.540 14:08:02 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:22.540 14:08:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.540 14:08:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:22.540 14:08:02 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:22.540 14:08:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:25:22.541 14:08:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.541 14:08:02 -- host/auth.sh@68 -- # digest=sha256 00:25:22.541 14:08:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:22.541 14:08:02 -- host/auth.sh@68 -- # keyid=1 00:25:22.541 14:08:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.541 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.541 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.541 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.541 14:08:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.541 14:08:02 -- nvmf/common.sh@717 -- # local ip 00:25:22.541 14:08:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.541 14:08:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.541 14:08:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.541 14:08:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.541 14:08:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.541 14:08:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.541 14:08:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.541 14:08:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.541 14:08:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.541 14:08:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:22.800 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.800 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.800 nvme0n1 00:25:22.800 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.800 14:08:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:22.800 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.800 14:08:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:22.800 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.800 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.800 14:08:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:22.800 14:08:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:22.800 14:08:02 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:25:22.800 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.800 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.800 14:08:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:22.800 14:08:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:22.800 14:08:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:22.800 14:08:02 -- host/auth.sh@44 -- # digest=sha256 00:25:22.800 14:08:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:22.800 14:08:02 -- host/auth.sh@44 -- # keyid=2 00:25:22.800 14:08:02 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:22.800 14:08:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:22.800 14:08:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:22.800 14:08:02 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:22.800 14:08:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:25:22.800 14:08:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:22.800 14:08:02 -- host/auth.sh@68 -- # digest=sha256 00:25:22.800 14:08:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:22.800 14:08:02 -- host/auth.sh@68 -- # keyid=2 00:25:22.800 14:08:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:22.800 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.800 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.800 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:22.800 14:08:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:22.800 14:08:02 -- nvmf/common.sh@717 -- # local ip 00:25:22.800 14:08:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:22.800 14:08:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:22.800 14:08:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:22.800 14:08:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:22.800 14:08:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:22.800 14:08:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:22.800 14:08:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:22.800 14:08:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:22.800 14:08:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:22.800 14:08:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:22.800 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:22.800 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.062 nvme0n1 00:25:23.062 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.062 14:08:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.062 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.062 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.062 14:08:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:23.062 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.062 14:08:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.062 14:08:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.062 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.062 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.062 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.062 
14:08:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:23.062 14:08:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:23.062 14:08:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:23.062 14:08:02 -- host/auth.sh@44 -- # digest=sha256 00:25:23.062 14:08:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.062 14:08:02 -- host/auth.sh@44 -- # keyid=3 00:25:23.062 14:08:02 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:23.062 14:08:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:23.062 14:08:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:23.062 14:08:02 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:23.062 14:08:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:25:23.062 14:08:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:23.062 14:08:02 -- host/auth.sh@68 -- # digest=sha256 00:25:23.062 14:08:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:23.062 14:08:02 -- host/auth.sh@68 -- # keyid=3 00:25:23.062 14:08:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.062 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.062 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.062 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.062 14:08:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:23.062 14:08:02 -- nvmf/common.sh@717 -- # local ip 00:25:23.062 14:08:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:23.062 14:08:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:23.062 14:08:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.062 14:08:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.062 14:08:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:23.062 14:08:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.062 14:08:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:23.062 14:08:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:23.062 14:08:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:23.062 14:08:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:23.062 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.062 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.062 nvme0n1 00:25:23.062 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.062 14:08:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.062 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.062 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.062 14:08:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:23.062 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.330 14:08:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.330 14:08:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.330 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.331 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.331 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.331 14:08:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:23.331 14:08:02 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:25:23.331 14:08:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:23.331 14:08:02 -- host/auth.sh@44 -- # digest=sha256 00:25:23.331 14:08:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:23.331 14:08:02 -- host/auth.sh@44 -- # keyid=4 00:25:23.331 14:08:02 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:23.331 14:08:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:23.331 14:08:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:23.331 14:08:02 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:23.331 14:08:02 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:25:23.331 14:08:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:23.331 14:08:02 -- host/auth.sh@68 -- # digest=sha256 00:25:23.331 14:08:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:23.331 14:08:02 -- host/auth.sh@68 -- # keyid=4 00:25:23.331 14:08:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:23.331 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.331 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.331 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.331 14:08:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:23.331 14:08:02 -- nvmf/common.sh@717 -- # local ip 00:25:23.331 14:08:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:23.331 14:08:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:23.331 14:08:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.331 14:08:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.331 14:08:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:23.331 14:08:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.331 14:08:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:23.331 14:08:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:23.331 14:08:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:23.331 14:08:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:23.331 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.331 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.331 nvme0n1 00:25:23.331 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.331 14:08:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:23.331 14:08:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.331 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.331 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.331 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.331 14:08:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.331 14:08:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.331 14:08:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.331 14:08:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.331 14:08:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.331 14:08:02 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:23.331 14:08:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:23.331 14:08:02 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:25:23.331 14:08:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:23.331 14:08:02 -- host/auth.sh@44 -- # digest=sha256 00:25:23.331 14:08:02 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.331 14:08:02 -- host/auth.sh@44 -- # keyid=0 00:25:23.331 14:08:02 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:23.331 14:08:02 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:23.331 14:08:02 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:23.896 14:08:03 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:23.896 14:08:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:25:23.896 14:08:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:23.896 14:08:03 -- host/auth.sh@68 -- # digest=sha256 00:25:23.896 14:08:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:23.896 14:08:03 -- host/auth.sh@68 -- # keyid=0 00:25:23.896 14:08:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:23.896 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.896 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:23.896 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:23.896 14:08:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:23.896 14:08:03 -- nvmf/common.sh@717 -- # local ip 00:25:23.896 14:08:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:23.896 14:08:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:23.896 14:08:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:23.896 14:08:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:23.896 14:08:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:23.896 14:08:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:23.896 14:08:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:23.896 14:08:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:23.896 14:08:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:23.896 14:08:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:23.896 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:23.896 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.154 nvme0n1 00:25:24.154 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.154 14:08:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.154 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.154 14:08:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.154 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.154 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.154 14:08:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.154 14:08:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.154 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.154 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.154 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.154 14:08:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.154 14:08:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:24.154 14:08:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.154 14:08:03 -- host/auth.sh@44 -- # 
digest=sha256 00:25:24.154 14:08:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.154 14:08:03 -- host/auth.sh@44 -- # keyid=1 00:25:24.154 14:08:03 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:24.154 14:08:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:24.154 14:08:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:24.154 14:08:03 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:24.154 14:08:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:25:24.154 14:08:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.154 14:08:03 -- host/auth.sh@68 -- # digest=sha256 00:25:24.154 14:08:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:24.154 14:08:03 -- host/auth.sh@68 -- # keyid=1 00:25:24.154 14:08:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.154 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.154 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.154 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.154 14:08:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.154 14:08:03 -- nvmf/common.sh@717 -- # local ip 00:25:24.154 14:08:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.154 14:08:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.154 14:08:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.154 14:08:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.154 14:08:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.154 14:08:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.154 14:08:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.154 14:08:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.154 14:08:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.154 14:08:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:24.154 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.154 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.413 nvme0n1 00:25:24.413 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.413 14:08:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.413 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.413 14:08:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.413 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.413 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.413 14:08:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.413 14:08:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.413 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.413 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.413 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.413 14:08:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.413 14:08:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:24.413 14:08:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.413 14:08:03 -- host/auth.sh@44 -- # digest=sha256 00:25:24.413 14:08:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.413 14:08:03 -- host/auth.sh@44 
-- # keyid=2 00:25:24.413 14:08:03 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:24.413 14:08:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:24.413 14:08:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:24.413 14:08:03 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:24.413 14:08:03 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:25:24.413 14:08:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.413 14:08:03 -- host/auth.sh@68 -- # digest=sha256 00:25:24.413 14:08:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:24.413 14:08:03 -- host/auth.sh@68 -- # keyid=2 00:25:24.413 14:08:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.413 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.413 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.413 14:08:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.413 14:08:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.413 14:08:03 -- nvmf/common.sh@717 -- # local ip 00:25:24.413 14:08:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.413 14:08:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.413 14:08:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.413 14:08:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.413 14:08:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.413 14:08:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.413 14:08:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.413 14:08:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.413 14:08:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.413 14:08:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:24.413 14:08:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.413 14:08:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.671 nvme0n1 00:25:24.671 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.671 14:08:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.671 14:08:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.671 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.671 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.671 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.671 14:08:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.671 14:08:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.671 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.671 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.671 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.671 14:08:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.671 14:08:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:24.671 14:08:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.671 14:08:04 -- host/auth.sh@44 -- # digest=sha256 00:25:24.671 14:08:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.671 14:08:04 -- host/auth.sh@44 -- # keyid=3 00:25:24.671 14:08:04 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:24.671 14:08:04 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:24.671 14:08:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:24.671 14:08:04 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:24.671 14:08:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:25:24.671 14:08:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.671 14:08:04 -- host/auth.sh@68 -- # digest=sha256 00:25:24.671 14:08:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:24.671 14:08:04 -- host/auth.sh@68 -- # keyid=3 00:25:24.671 14:08:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.671 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.671 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.671 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.671 14:08:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.671 14:08:04 -- nvmf/common.sh@717 -- # local ip 00:25:24.671 14:08:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.671 14:08:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.671 14:08:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.671 14:08:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.671 14:08:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.671 14:08:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.671 14:08:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.671 14:08:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.671 14:08:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.671 14:08:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:24.671 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.671 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 nvme0n1 00:25:24.929 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.929 14:08:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.929 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.929 14:08:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:24.929 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.929 14:08:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.929 14:08:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.929 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.929 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.929 14:08:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:24.929 14:08:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:24.929 14:08:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:24.929 14:08:04 -- host/auth.sh@44 -- # digest=sha256 00:25:24.929 14:08:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.929 14:08:04 -- host/auth.sh@44 -- # keyid=4 00:25:24.929 14:08:04 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:24.929 14:08:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:24.929 14:08:04 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:25:24.929 14:08:04 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:24.929 14:08:04 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:25:24.929 14:08:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:24.929 14:08:04 -- host/auth.sh@68 -- # digest=sha256 00:25:24.929 14:08:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:24.929 14:08:04 -- host/auth.sh@68 -- # keyid=4 00:25:24.929 14:08:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:24.929 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.929 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.929 14:08:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:24.929 14:08:04 -- nvmf/common.sh@717 -- # local ip 00:25:24.929 14:08:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:24.929 14:08:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:24.929 14:08:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.929 14:08:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.929 14:08:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:24.929 14:08:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.929 14:08:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:24.929 14:08:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:24.929 14:08:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:24.929 14:08:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.929 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.929 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:24.929 nvme0n1 00:25:24.929 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.929 14:08:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.929 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.186 14:08:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:25.186 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:25.186 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.186 14:08:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.186 14:08:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.186 14:08:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:25.186 14:08:04 -- common/autotest_common.sh@10 -- # set +x 00:25:25.186 14:08:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:25.186 14:08:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:25.186 14:08:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:25.186 14:08:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:25.186 14:08:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:25.186 14:08:04 -- host/auth.sh@44 -- # digest=sha256 00:25:25.186 14:08:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.186 14:08:04 -- host/auth.sh@44 -- # keyid=0 00:25:25.186 14:08:04 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:25.186 14:08:04 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:25.186 14:08:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:26.561 14:08:05 -- 
host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:26.561 14:08:05 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:25:26.561 14:08:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:26.561 14:08:05 -- host/auth.sh@68 -- # digest=sha256 00:25:26.561 14:08:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:26.561 14:08:05 -- host/auth.sh@68 -- # keyid=0 00:25:26.561 14:08:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.561 14:08:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.561 14:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:26.561 14:08:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.561 14:08:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:26.561 14:08:05 -- nvmf/common.sh@717 -- # local ip 00:25:26.561 14:08:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:26.561 14:08:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:26.561 14:08:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.561 14:08:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.561 14:08:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:26.561 14:08:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.561 14:08:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:26.561 14:08:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:26.561 14:08:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:26.561 14:08:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:26.561 14:08:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.561 14:08:05 -- common/autotest_common.sh@10 -- # set +x 00:25:26.819 nvme0n1 00:25:26.819 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.819 14:08:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.819 14:08:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:26.819 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.819 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.819 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.819 14:08:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.819 14:08:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.819 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.819 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.819 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.819 14:08:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:26.819 14:08:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:26.819 14:08:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:26.819 14:08:06 -- host/auth.sh@44 -- # digest=sha256 00:25:26.819 14:08:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.819 14:08:06 -- host/auth.sh@44 -- # keyid=1 00:25:26.819 14:08:06 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:26.819 14:08:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:26.819 14:08:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:26.819 14:08:06 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:26.819 14:08:06 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:25:26.819 14:08:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:26.819 14:08:06 -- host/auth.sh@68 -- # digest=sha256 00:25:26.819 14:08:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:26.819 14:08:06 -- host/auth.sh@68 -- # keyid=1 00:25:26.819 14:08:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:26.819 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.819 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.819 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:26.819 14:08:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:26.819 14:08:06 -- nvmf/common.sh@717 -- # local ip 00:25:26.819 14:08:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:26.820 14:08:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:26.820 14:08:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:26.820 14:08:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:26.820 14:08:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:26.820 14:08:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:26.820 14:08:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:26.820 14:08:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:26.820 14:08:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:26.820 14:08:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:26.820 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:26.820 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.078 nvme0n1 00:25:27.078 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.078 14:08:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.078 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.078 14:08:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:27.078 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.078 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.078 14:08:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.078 14:08:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.078 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.078 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.078 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.078 14:08:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:27.078 14:08:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:27.078 14:08:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:27.078 14:08:06 -- host/auth.sh@44 -- # digest=sha256 00:25:27.078 14:08:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.078 14:08:06 -- host/auth.sh@44 -- # keyid=2 00:25:27.078 14:08:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:27.078 14:08:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:27.078 14:08:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:27.079 14:08:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:27.079 14:08:06 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:25:27.079 14:08:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:27.079 14:08:06 -- 
host/auth.sh@68 -- # digest=sha256 00:25:27.079 14:08:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:27.079 14:08:06 -- host/auth.sh@68 -- # keyid=2 00:25:27.079 14:08:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.079 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.079 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.079 14:08:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.079 14:08:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:27.079 14:08:06 -- nvmf/common.sh@717 -- # local ip 00:25:27.079 14:08:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:27.079 14:08:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:27.079 14:08:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.079 14:08:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.079 14:08:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:27.079 14:08:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.079 14:08:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:27.079 14:08:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:27.079 14:08:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:27.079 14:08:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:27.079 14:08:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.079 14:08:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.337 nvme0n1 00:25:27.337 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.337 14:08:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.337 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.337 14:08:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:27.337 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.594 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.594 14:08:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.594 14:08:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.594 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.594 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.594 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.594 14:08:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:27.594 14:08:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:27.595 14:08:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:27.595 14:08:07 -- host/auth.sh@44 -- # digest=sha256 00:25:27.595 14:08:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.595 14:08:07 -- host/auth.sh@44 -- # keyid=3 00:25:27.595 14:08:07 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:27.595 14:08:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:27.595 14:08:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:27.595 14:08:07 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:27.595 14:08:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:25:27.595 14:08:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:27.595 14:08:07 -- host/auth.sh@68 -- # digest=sha256 00:25:27.595 14:08:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:27.595 14:08:07 
-- host/auth.sh@68 -- # keyid=3 00:25:27.595 14:08:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.595 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.595 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.595 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.595 14:08:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:27.595 14:08:07 -- nvmf/common.sh@717 -- # local ip 00:25:27.595 14:08:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:27.595 14:08:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:27.595 14:08:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.595 14:08:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.595 14:08:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:27.595 14:08:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.595 14:08:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:27.595 14:08:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:27.595 14:08:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:27.595 14:08:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:27.595 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.595 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.852 nvme0n1 00:25:27.852 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.853 14:08:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.853 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.853 14:08:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:27.853 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.853 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.853 14:08:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.853 14:08:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.853 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.853 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.853 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.853 14:08:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:27.853 14:08:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:27.853 14:08:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:27.853 14:08:07 -- host/auth.sh@44 -- # digest=sha256 00:25:27.853 14:08:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:27.853 14:08:07 -- host/auth.sh@44 -- # keyid=4 00:25:27.853 14:08:07 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:27.853 14:08:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:27.853 14:08:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:27.853 14:08:07 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:27.853 14:08:07 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:25:27.853 14:08:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:27.853 14:08:07 -- host/auth.sh@68 -- # digest=sha256 00:25:27.853 14:08:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:27.853 14:08:07 -- host/auth.sh@68 -- # keyid=4 00:25:27.853 14:08:07 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:27.853 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.853 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.853 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.853 14:08:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:27.853 14:08:07 -- nvmf/common.sh@717 -- # local ip 00:25:27.853 14:08:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:27.853 14:08:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:27.853 14:08:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.853 14:08:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.853 14:08:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:27.853 14:08:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.853 14:08:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:27.853 14:08:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:27.853 14:08:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:27.853 14:08:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:27.853 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.853 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:28.111 nvme0n1 00:25:28.111 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.111 14:08:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.111 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.111 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:28.111 14:08:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:28.111 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.373 14:08:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.373 14:08:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.373 14:08:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.373 14:08:07 -- common/autotest_common.sh@10 -- # set +x 00:25:28.373 14:08:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.373 14:08:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:28.373 14:08:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:28.373 14:08:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:28.373 14:08:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:28.373 14:08:07 -- host/auth.sh@44 -- # digest=sha256 00:25:28.373 14:08:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.373 14:08:07 -- host/auth.sh@44 -- # keyid=0 00:25:28.373 14:08:07 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:28.373 14:08:07 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:28.373 14:08:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:31.670 14:08:10 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:31.670 14:08:10 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:25:31.670 14:08:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.670 14:08:10 -- host/auth.sh@68 -- # digest=sha256 00:25:31.670 14:08:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:31.670 14:08:10 -- host/auth.sh@68 -- # keyid=0 00:25:31.670 14:08:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
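The trace up to this point repeats one cycle per key slot: install the DHHC-1 secret on the target (nvmet_auth_set_key), restrict the initiator to the digest and DH group under test (bdev_nvme_set_options), attach the controller with the matching --dhchap-key, confirm it appears in bdev_nvme_get_controllers, then detach it so the next combination starts from a clean controller. A minimal sketch of that cycle is below; the command names, NQNs and port are taken from the trace itself, while the surrounding loop is an approximation of host/auth.sh rather than its verbatim source:

    # approximate shape of the per-key cycle traced above (not verbatim host/auth.sh)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # target side: install the DHHC-1 secret for this key slot
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                # initiator side: only accept the digest/dhgroup under test
                rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                # attach with the matching key, confirm the controller, then detach
                rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                    -a "$(get_main_ns_ip)" -s 4420 \
                    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                    --dhchap-key "key$keyid"
                [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
                rpc_cmd bdev_nvme_detach_controller nvme0
            done
        done
    done

The detach at the end of every iteration is what produces the repeated bdev_nvme_detach_controller / nvme0n1 lines throughout the log.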
00:25:31.670 14:08:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.670 14:08:10 -- common/autotest_common.sh@10 -- # set +x 00:25:31.670 14:08:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.670 14:08:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.670 14:08:10 -- nvmf/common.sh@717 -- # local ip 00:25:31.670 14:08:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.670 14:08:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:31.670 14:08:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.670 14:08:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.670 14:08:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.670 14:08:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.670 14:08:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.670 14:08:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.670 14:08:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.670 14:08:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:31.670 14:08:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.670 14:08:10 -- common/autotest_common.sh@10 -- # set +x 00:25:31.670 nvme0n1 00:25:31.670 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.670 14:08:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.670 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.670 14:08:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:31.670 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:31.670 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.670 14:08:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.670 14:08:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.670 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.670 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:31.670 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.670 14:08:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:31.670 14:08:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:31.670 14:08:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:31.670 14:08:11 -- host/auth.sh@44 -- # digest=sha256 00:25:31.670 14:08:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:31.670 14:08:11 -- host/auth.sh@44 -- # keyid=1 00:25:31.670 14:08:11 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:31.670 14:08:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:31.670 14:08:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:31.670 14:08:11 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:31.670 14:08:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:25:31.670 14:08:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:31.670 14:08:11 -- host/auth.sh@68 -- # digest=sha256 00:25:31.670 14:08:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:31.670 14:08:11 -- host/auth.sh@68 -- # keyid=1 00:25:31.670 14:08:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:31.670 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.670 14:08:11 -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.670 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:31.670 14:08:11 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:31.670 14:08:11 -- nvmf/common.sh@717 -- # local ip 00:25:31.670 14:08:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:31.670 14:08:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:31.670 14:08:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.670 14:08:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.670 14:08:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:31.670 14:08:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.670 14:08:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:31.670 14:08:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:31.670 14:08:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:31.670 14:08:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:31.670 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:31.670 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:32.238 nvme0n1 00:25:32.238 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.238 14:08:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.238 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.238 14:08:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.238 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:32.238 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.238 14:08:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.238 14:08:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.238 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.238 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:32.238 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.238 14:08:11 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.238 14:08:11 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:32.238 14:08:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.238 14:08:11 -- host/auth.sh@44 -- # digest=sha256 00:25:32.238 14:08:11 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.238 14:08:11 -- host/auth.sh@44 -- # keyid=2 00:25:32.238 14:08:11 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:32.238 14:08:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:32.238 14:08:11 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:32.238 14:08:11 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:32.238 14:08:11 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:25:32.238 14:08:11 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.238 14:08:11 -- host/auth.sh@68 -- # digest=sha256 00:25:32.238 14:08:11 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:32.238 14:08:11 -- host/auth.sh@68 -- # keyid=2 00:25:32.238 14:08:11 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:32.238 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.238 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:32.238 14:08:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.238 14:08:11 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:32.238 14:08:11 -- nvmf/common.sh@717 -- # local ip 00:25:32.238 14:08:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:32.238 14:08:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.238 14:08:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.238 14:08:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.238 14:08:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.238 14:08:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.238 14:08:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.238 14:08:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.238 14:08:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.238 14:08:11 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:32.238 14:08:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.238 14:08:11 -- common/autotest_common.sh@10 -- # set +x 00:25:32.806 nvme0n1 00:25:32.806 14:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.806 14:08:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.806 14:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.806 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:32.806 14:08:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:32.806 14:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.806 14:08:12 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.806 14:08:12 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.806 14:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.806 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:32.806 14:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.806 14:08:12 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:32.806 14:08:12 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:32.806 14:08:12 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:32.806 14:08:12 -- host/auth.sh@44 -- # digest=sha256 00:25:32.806 14:08:12 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:32.806 14:08:12 -- host/auth.sh@44 -- # keyid=3 00:25:32.806 14:08:12 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:32.806 14:08:12 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:32.806 14:08:12 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:32.806 14:08:12 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:32.806 14:08:12 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:25:32.806 14:08:12 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:32.806 14:08:12 -- host/auth.sh@68 -- # digest=sha256 00:25:32.806 14:08:12 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:32.806 14:08:12 -- host/auth.sh@68 -- # keyid=3 00:25:32.806 14:08:12 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:32.806 14:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.806 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:32.806 14:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.806 14:08:12 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:32.806 14:08:12 -- nvmf/common.sh@717 -- # local ip 00:25:32.806 14:08:12 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:32.806 14:08:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:32.806 14:08:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.806 14:08:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.806 14:08:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:32.806 14:08:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.806 14:08:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:32.806 14:08:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:32.806 14:08:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:32.806 14:08:12 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:32.806 14:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:32.806 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 nvme0n1 00:25:33.372 14:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.372 14:08:12 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.372 14:08:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.372 14:08:12 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.372 14:08:12 -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 14:08:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.372 14:08:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.372 14:08:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.372 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.372 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.372 14:08:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:33.372 14:08:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:33.372 14:08:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:33.372 14:08:13 -- host/auth.sh@44 -- # digest=sha256 00:25:33.372 14:08:13 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:33.372 14:08:13 -- host/auth.sh@44 -- # keyid=4 00:25:33.372 14:08:13 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:33.372 14:08:13 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:33.372 14:08:13 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:33.372 14:08:13 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:33.372 14:08:13 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:25:33.372 14:08:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:33.372 14:08:13 -- host/auth.sh@68 -- # digest=sha256 00:25:33.372 14:08:13 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:33.372 14:08:13 -- host/auth.sh@68 -- # keyid=4 00:25:33.372 14:08:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:33.372 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.372 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.372 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.372 14:08:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:33.372 14:08:13 -- nvmf/common.sh@717 -- # local ip 00:25:33.372 14:08:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:33.372 14:08:13 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:33.372 14:08:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.372 14:08:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.372 14:08:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:33.632 14:08:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.632 14:08:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:33.632 14:08:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:33.632 14:08:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:33.632 14:08:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.632 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.632 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.922 nvme0n1 00:25:33.922 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.922 14:08:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.922 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.922 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:33.922 14:08:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:33.922 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:33.922 14:08:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.922 14:08:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.922 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:33.922 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.180 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.180 14:08:13 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:34.180 14:08:13 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.180 14:08:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.180 14:08:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:34.180 14:08:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.180 14:08:13 -- host/auth.sh@44 -- # digest=sha384 00:25:34.180 14:08:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.180 14:08:13 -- host/auth.sh@44 -- # keyid=0 00:25:34.180 14:08:13 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:34.180 14:08:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.180 14:08:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:34.180 14:08:13 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:34.180 14:08:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:25:34.180 14:08:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.180 14:08:13 -- host/auth.sh@68 -- # digest=sha384 00:25:34.180 14:08:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:34.180 14:08:13 -- host/auth.sh@68 -- # keyid=0 00:25:34.180 14:08:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:34.180 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.180 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.180 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.180 14:08:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.180 14:08:13 -- nvmf/common.sh@717 -- # local ip 00:25:34.180 14:08:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.180 14:08:13 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:34.180 14:08:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.180 14:08:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.180 14:08:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.180 14:08:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.180 14:08:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.180 14:08:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.180 14:08:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.180 14:08:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:34.180 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.180 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.180 nvme0n1 00:25:34.181 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.181 14:08:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.181 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.181 14:08:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.181 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.181 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.181 14:08:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.181 14:08:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.181 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.181 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.181 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.181 14:08:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.181 14:08:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:34.181 14:08:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.181 14:08:13 -- host/auth.sh@44 -- # digest=sha384 00:25:34.181 14:08:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.181 14:08:13 -- host/auth.sh@44 -- # keyid=1 00:25:34.181 14:08:13 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:34.181 14:08:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.181 14:08:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:34.181 14:08:13 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:34.181 14:08:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:25:34.181 14:08:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.181 14:08:13 -- host/auth.sh@68 -- # digest=sha384 00:25:34.181 14:08:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:34.181 14:08:13 -- host/auth.sh@68 -- # keyid=1 00:25:34.181 14:08:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:34.181 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.181 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.181 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.181 14:08:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.181 14:08:13 -- nvmf/common.sh@717 -- # local ip 00:25:34.181 14:08:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.181 14:08:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.181 14:08:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.181 
14:08:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.181 14:08:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.181 14:08:13 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.181 14:08:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.181 14:08:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.181 14:08:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.181 14:08:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:34.181 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.181 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.440 nvme0n1 00:25:34.440 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.440 14:08:13 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.440 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.440 14:08:13 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.440 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.440 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.440 14:08:13 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.440 14:08:13 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.440 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.441 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.441 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.441 14:08:13 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.441 14:08:13 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:34.441 14:08:13 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.441 14:08:13 -- host/auth.sh@44 -- # digest=sha384 00:25:34.441 14:08:13 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.441 14:08:13 -- host/auth.sh@44 -- # keyid=2 00:25:34.441 14:08:13 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:34.441 14:08:13 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.441 14:08:13 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:34.441 14:08:13 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:34.441 14:08:13 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:25:34.441 14:08:13 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.441 14:08:13 -- host/auth.sh@68 -- # digest=sha384 00:25:34.441 14:08:13 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:34.441 14:08:13 -- host/auth.sh@68 -- # keyid=2 00:25:34.441 14:08:13 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:34.441 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.441 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.441 14:08:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.441 14:08:13 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.441 14:08:13 -- nvmf/common.sh@717 -- # local ip 00:25:34.441 14:08:13 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.441 14:08:13 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.441 14:08:13 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.441 14:08:13 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.441 14:08:13 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.441 14:08:13 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.441 14:08:13 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.441 14:08:13 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.441 14:08:13 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.441 14:08:13 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:34.441 14:08:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.441 14:08:13 -- common/autotest_common.sh@10 -- # set +x 00:25:34.441 nvme0n1 00:25:34.441 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.441 14:08:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.441 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.441 14:08:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.441 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.441 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.441 14:08:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.441 14:08:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.441 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.441 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.441 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.441 14:08:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.701 14:08:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:34.701 14:08:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.701 14:08:14 -- host/auth.sh@44 -- # digest=sha384 00:25:34.701 14:08:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.701 14:08:14 -- host/auth.sh@44 -- # keyid=3 00:25:34.701 14:08:14 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:34.701 14:08:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.701 14:08:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:34.701 14:08:14 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:34.701 14:08:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:25:34.701 14:08:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.701 14:08:14 -- host/auth.sh@68 -- # digest=sha384 00:25:34.701 14:08:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:34.701 14:08:14 -- host/auth.sh@68 -- # keyid=3 00:25:34.701 14:08:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:34.701 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.701 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.701 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.701 14:08:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.701 14:08:14 -- nvmf/common.sh@717 -- # local ip 00:25:34.701 14:08:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.701 14:08:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.701 14:08:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.701 14:08:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.701 14:08:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.701 14:08:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.701 14:08:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
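Each attach is preceded by get_main_ns_ip, whose expansion is visible in the trace: it maps the transport to the environment variable holding the initiator address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes its value, 10.0.0.1 in this run. A rough reconstruction from the traced lines at nvmf/common.sh@717-731 follows; the transport variable name used here ($TEST_TRANSPORT) is an assumption, since the log only shows it expanding to "tcp":

    # rough reconstruction of get_main_ns_ip from its trace at nvmf/common.sh@717-731;
    # $TEST_TRANSPORT is an assumed variable name -- the log only shows it expanding to "tcp"
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1            # indirect expansion of that variable
        echo "${!ip}"                          # 10.0.0.1 in this run
    }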
00:25:34.701 14:08:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.701 14:08:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.701 14:08:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:34.701 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.701 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.701 nvme0n1 00:25:34.701 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.701 14:08:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.701 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.701 14:08:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.701 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.701 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.701 14:08:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.701 14:08:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.701 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.701 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.701 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.701 14:08:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.701 14:08:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:34.701 14:08:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.701 14:08:14 -- host/auth.sh@44 -- # digest=sha384 00:25:34.701 14:08:14 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:34.701 14:08:14 -- host/auth.sh@44 -- # keyid=4 00:25:34.701 14:08:14 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:34.701 14:08:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.701 14:08:14 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:34.701 14:08:14 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:34.701 14:08:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:25:34.701 14:08:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.701 14:08:14 -- host/auth.sh@68 -- # digest=sha384 00:25:34.701 14:08:14 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:34.701 14:08:14 -- host/auth.sh@68 -- # keyid=4 00:25:34.701 14:08:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:34.701 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.701 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.701 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.701 14:08:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.701 14:08:14 -- nvmf/common.sh@717 -- # local ip 00:25:34.701 14:08:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.701 14:08:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.701 14:08:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.701 14:08:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.701 14:08:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.701 14:08:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.701 14:08:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.701 14:08:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.701 
14:08:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.701 14:08:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.701 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.701 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 nvme0n1 00:25:34.959 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.959 14:08:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.959 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.959 14:08:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.959 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.959 14:08:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.959 14:08:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.959 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.959 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.959 14:08:14 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.959 14:08:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:34.959 14:08:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:34.959 14:08:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:34.959 14:08:14 -- host/auth.sh@44 -- # digest=sha384 00:25:34.959 14:08:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.959 14:08:14 -- host/auth.sh@44 -- # keyid=0 00:25:34.959 14:08:14 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:34.959 14:08:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:34.959 14:08:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:34.959 14:08:14 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:34.959 14:08:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:25:34.959 14:08:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:34.959 14:08:14 -- host/auth.sh@68 -- # digest=sha384 00:25:34.959 14:08:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:34.959 14:08:14 -- host/auth.sh@68 -- # keyid=0 00:25:34.959 14:08:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:34.959 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.959 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.959 14:08:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:34.959 14:08:14 -- nvmf/common.sh@717 -- # local ip 00:25:34.959 14:08:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:34.959 14:08:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:34.959 14:08:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.959 14:08:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.959 14:08:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:34.959 14:08:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.959 14:08:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:34.959 14:08:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:34.959 14:08:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:34.959 14:08:14 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:34.959 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.959 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 nvme0n1 00:25:34.959 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:34.959 14:08:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.959 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:34.959 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:34.959 14:08:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:34.959 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.235 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.235 14:08:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:35.235 14:08:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.235 14:08:14 -- host/auth.sh@44 -- # digest=sha384 00:25:35.235 14:08:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.235 14:08:14 -- host/auth.sh@44 -- # keyid=1 00:25:35.235 14:08:14 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:35.235 14:08:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.235 14:08:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:35.235 14:08:14 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:35.235 14:08:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:25:35.235 14:08:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.235 14:08:14 -- host/auth.sh@68 -- # digest=sha384 00:25:35.235 14:08:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:35.235 14:08:14 -- host/auth.sh@68 -- # keyid=1 00:25:35.235 14:08:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.235 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.235 14:08:14 -- nvmf/common.sh@717 -- # local ip 00:25:35.235 14:08:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.235 14:08:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.235 14:08:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.235 14:08:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.235 14:08:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.235 14:08:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.235 14:08:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.235 14:08:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.235 14:08:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.235 14:08:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.235 nvme0n1 00:25:35.235 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 14:08:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.235 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.235 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.235 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.235 14:08:14 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:35.235 14:08:14 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.235 14:08:14 -- host/auth.sh@44 -- # digest=sha384 00:25:35.235 14:08:14 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.235 14:08:14 -- host/auth.sh@44 -- # keyid=2 00:25:35.235 14:08:14 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:35.235 14:08:14 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.235 14:08:14 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:35.235 14:08:14 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:35.235 14:08:14 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:25:35.235 14:08:14 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.235 14:08:14 -- host/auth.sh@68 -- # digest=sha384 00:25:35.235 14:08:14 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:35.235 14:08:14 -- host/auth.sh@68 -- # keyid=2 00:25:35.235 14:08:14 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.235 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.235 14:08:14 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.235 14:08:14 -- nvmf/common.sh@717 -- # local ip 00:25:35.235 14:08:14 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.235 14:08:14 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.235 14:08:14 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.235 14:08:14 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.235 14:08:14 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.235 14:08:14 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.235 14:08:14 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.235 14:08:14 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.235 14:08:14 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.235 14:08:14 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:35.235 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.235 
14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.550 nvme0n1 00:25:35.550 14:08:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.550 14:08:14 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.550 14:08:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.550 14:08:14 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.550 14:08:14 -- common/autotest_common.sh@10 -- # set +x 00:25:35.550 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.550 14:08:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.550 14:08:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.550 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.550 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.550 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.550 14:08:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.550 14:08:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:35.550 14:08:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.550 14:08:15 -- host/auth.sh@44 -- # digest=sha384 00:25:35.550 14:08:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.550 14:08:15 -- host/auth.sh@44 -- # keyid=3 00:25:35.550 14:08:15 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:35.550 14:08:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.550 14:08:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:35.550 14:08:15 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:35.550 14:08:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:25:35.550 14:08:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.550 14:08:15 -- host/auth.sh@68 -- # digest=sha384 00:25:35.550 14:08:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:35.550 14:08:15 -- host/auth.sh@68 -- # keyid=3 00:25:35.550 14:08:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:35.550 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.550 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.550 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.550 14:08:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.550 14:08:15 -- nvmf/common.sh@717 -- # local ip 00:25:35.550 14:08:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.550 14:08:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.550 14:08:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.550 14:08:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.550 14:08:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.550 14:08:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.550 14:08:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.550 14:08:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.550 14:08:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.550 14:08:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:35.550 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.550 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.550 nvme0n1 00:25:35.550 14:08:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.550 14:08:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.550 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.550 14:08:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.550 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.550 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.810 14:08:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:35.810 14:08:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.810 14:08:15 -- host/auth.sh@44 -- # digest=sha384 00:25:35.810 14:08:15 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:35.810 14:08:15 -- host/auth.sh@44 -- # keyid=4 00:25:35.810 14:08:15 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:35.810 14:08:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.810 14:08:15 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:35.810 14:08:15 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:35.810 14:08:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:25:35.810 14:08:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.810 14:08:15 -- host/auth.sh@68 -- # digest=sha384 00:25:35.810 14:08:15 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:35.810 14:08:15 -- host/auth.sh@68 -- # keyid=4 00:25:35.810 14:08:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.810 14:08:15 -- nvmf/common.sh@717 -- # local ip 00:25:35.810 14:08:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.810 14:08:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.810 14:08:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.810 14:08:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.810 14:08:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.810 14:08:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.810 14:08:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.810 14:08:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.810 14:08:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.810 14:08:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 nvme0n1 00:25:35.810 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.810 14:08:15 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.810 14:08:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:35.810 14:08:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:35.810 14:08:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:35.810 14:08:15 -- host/auth.sh@44 -- # digest=sha384 00:25:35.810 14:08:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.810 14:08:15 -- host/auth.sh@44 -- # keyid=0 00:25:35.810 14:08:15 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:35.810 14:08:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:35.810 14:08:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:35.810 14:08:15 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:35.810 14:08:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:25:35.810 14:08:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:35.810 14:08:15 -- host/auth.sh@68 -- # digest=sha384 00:25:35.810 14:08:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:35.810 14:08:15 -- host/auth.sh@68 -- # keyid=0 00:25:35.810 14:08:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:35.810 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:35.810 14:08:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:35.810 14:08:15 -- nvmf/common.sh@717 -- # local ip 00:25:35.810 14:08:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:35.810 14:08:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:35.810 14:08:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.810 14:08:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.810 14:08:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:35.810 14:08:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.810 14:08:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:35.810 14:08:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:35.810 14:08:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:35.810 14:08:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:35.810 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:35.810 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.069 nvme0n1 00:25:36.069 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.069 14:08:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.069 14:08:15 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.069 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.069 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.069 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.070 14:08:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.070 14:08:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.070 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.070 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.070 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.070 14:08:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.070 14:08:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:36.070 14:08:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.070 14:08:15 -- host/auth.sh@44 -- # digest=sha384 00:25:36.070 14:08:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.070 14:08:15 -- host/auth.sh@44 -- # keyid=1 00:25:36.070 14:08:15 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:36.070 14:08:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.070 14:08:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:36.070 14:08:15 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:36.070 14:08:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:25:36.070 14:08:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.070 14:08:15 -- host/auth.sh@68 -- # digest=sha384 00:25:36.070 14:08:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:36.070 14:08:15 -- host/auth.sh@68 -- # keyid=1 00:25:36.070 14:08:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.070 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.070 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.070 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.070 14:08:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.070 14:08:15 -- nvmf/common.sh@717 -- # local ip 00:25:36.070 14:08:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.070 14:08:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.070 14:08:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.070 14:08:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.070 14:08:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.070 14:08:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.070 14:08:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.070 14:08:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.070 14:08:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.070 14:08:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:36.070 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.070 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.329 nvme0n1 00:25:36.329 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.329 14:08:15 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.329 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.329 14:08:15 -- host/auth.sh@73 -- # jq -r '.[].name' 
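On the target side, nvmet_auth_set_key (host/auth.sh@42-49 in the trace) simply records the parameters for the host entry: the digest as an hmac(...) string, the FFDHE group, and the DHHC-1 secret for the selected key slot. The sketch below assumes these echoes land in the usual kernel nvmet configfs attributes; the trace shows only the echoed values, not their destination paths:

    # sketch of the target-side helper traced at host/auth.sh@42-49; the configfs
    # destination paths are assumed -- the trace shows only the echoed values
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]}               # e.g. DHHC-1:00:MWU0YTZh...: for keyid 1
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha384)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe4096
        echo "$key"          > "$host/dhchap_key"
    }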
00:25:36.329 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.329 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.329 14:08:15 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.329 14:08:15 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.329 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.329 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.329 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.329 14:08:15 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.329 14:08:15 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:36.329 14:08:15 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.329 14:08:15 -- host/auth.sh@44 -- # digest=sha384 00:25:36.329 14:08:15 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.329 14:08:15 -- host/auth.sh@44 -- # keyid=2 00:25:36.329 14:08:15 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:36.329 14:08:15 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.329 14:08:15 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:36.329 14:08:15 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:36.329 14:08:15 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:25:36.329 14:08:15 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.329 14:08:15 -- host/auth.sh@68 -- # digest=sha384 00:25:36.329 14:08:15 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:36.329 14:08:15 -- host/auth.sh@68 -- # keyid=2 00:25:36.329 14:08:15 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.330 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.330 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.330 14:08:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.330 14:08:15 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.330 14:08:15 -- nvmf/common.sh@717 -- # local ip 00:25:36.330 14:08:15 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.330 14:08:15 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.330 14:08:15 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.330 14:08:15 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.330 14:08:15 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.330 14:08:15 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.330 14:08:15 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.330 14:08:15 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.330 14:08:15 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.330 14:08:15 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:36.330 14:08:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.330 14:08:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.589 nvme0n1 00:25:36.589 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.589 14:08:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.589 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.589 14:08:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.589 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.589 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.589 14:08:16 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.589 14:08:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.589 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.589 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.589 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.589 14:08:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.589 14:08:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:36.589 14:08:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.589 14:08:16 -- host/auth.sh@44 -- # digest=sha384 00:25:36.589 14:08:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.589 14:08:16 -- host/auth.sh@44 -- # keyid=3 00:25:36.589 14:08:16 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:36.589 14:08:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.589 14:08:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:36.589 14:08:16 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:36.589 14:08:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:25:36.589 14:08:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.589 14:08:16 -- host/auth.sh@68 -- # digest=sha384 00:25:36.589 14:08:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:36.589 14:08:16 -- host/auth.sh@68 -- # keyid=3 00:25:36.589 14:08:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.589 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.589 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.589 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.589 14:08:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.589 14:08:16 -- nvmf/common.sh@717 -- # local ip 00:25:36.589 14:08:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.589 14:08:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.589 14:08:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.589 14:08:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.589 14:08:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.589 14:08:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.589 14:08:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.589 14:08:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.589 14:08:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.589 14:08:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:36.589 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.589 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.848 nvme0n1 00:25:36.848 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.848 14:08:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.848 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.848 14:08:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:36.848 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.848 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.848 14:08:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.848 14:08:16 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:36.848 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.848 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.848 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.848 14:08:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:36.848 14:08:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:36.848 14:08:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:36.848 14:08:16 -- host/auth.sh@44 -- # digest=sha384 00:25:36.848 14:08:16 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:36.848 14:08:16 -- host/auth.sh@44 -- # keyid=4 00:25:36.848 14:08:16 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:36.848 14:08:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:36.848 14:08:16 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:36.848 14:08:16 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:36.848 14:08:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:25:36.848 14:08:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:36.848 14:08:16 -- host/auth.sh@68 -- # digest=sha384 00:25:36.848 14:08:16 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:36.848 14:08:16 -- host/auth.sh@68 -- # keyid=4 00:25:36.848 14:08:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:36.848 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.848 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:36.848 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:36.848 14:08:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:36.848 14:08:16 -- nvmf/common.sh@717 -- # local ip 00:25:36.848 14:08:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:36.848 14:08:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:36.848 14:08:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.848 14:08:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.848 14:08:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:36.848 14:08:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.848 14:08:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:36.848 14:08:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:36.848 14:08:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:36.848 14:08:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.848 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:36.848 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:37.107 nvme0n1 00:25:37.107 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.107 14:08:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.107 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.107 14:08:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:37.107 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:37.107 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.107 14:08:16 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.107 14:08:16 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.107 14:08:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.107 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:37.107 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.107 14:08:16 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.107 14:08:16 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:37.107 14:08:16 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:37.107 14:08:16 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:37.107 14:08:16 -- host/auth.sh@44 -- # digest=sha384 00:25:37.107 14:08:16 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.107 14:08:16 -- host/auth.sh@44 -- # keyid=0 00:25:37.107 14:08:16 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:37.107 14:08:16 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:37.107 14:08:16 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:37.107 14:08:16 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:37.107 14:08:16 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:25:37.107 14:08:16 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:37.107 14:08:16 -- host/auth.sh@68 -- # digest=sha384 00:25:37.107 14:08:16 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:37.107 14:08:16 -- host/auth.sh@68 -- # keyid=0 00:25:37.107 14:08:16 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.107 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.107 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:37.107 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.107 14:08:16 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:37.107 14:08:16 -- nvmf/common.sh@717 -- # local ip 00:25:37.107 14:08:16 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.107 14:08:16 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.107 14:08:16 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.107 14:08:16 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.107 14:08:16 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.107 14:08:16 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.107 14:08:16 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.107 14:08:16 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.107 14:08:16 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.107 14:08:16 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:37.107 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.107 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:37.365 nvme0n1 00:25:37.365 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.365 14:08:16 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.365 14:08:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.365 14:08:16 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:37.365 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:25:37.365 14:08:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.365 14:08:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.365 14:08:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.365 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.365 14:08:17 -- 
common/autotest_common.sh@10 -- # set +x 00:25:37.365 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.365 14:08:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:37.365 14:08:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:37.365 14:08:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:37.365 14:08:17 -- host/auth.sh@44 -- # digest=sha384 00:25:37.365 14:08:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.622 14:08:17 -- host/auth.sh@44 -- # keyid=1 00:25:37.622 14:08:17 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:37.622 14:08:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:37.622 14:08:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:37.622 14:08:17 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:37.622 14:08:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:25:37.622 14:08:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:37.622 14:08:17 -- host/auth.sh@68 -- # digest=sha384 00:25:37.622 14:08:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:37.622 14:08:17 -- host/auth.sh@68 -- # keyid=1 00:25:37.622 14:08:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.623 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.623 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:37.623 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.623 14:08:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:37.623 14:08:17 -- nvmf/common.sh@717 -- # local ip 00:25:37.623 14:08:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.623 14:08:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.623 14:08:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.623 14:08:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.623 14:08:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.623 14:08:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.623 14:08:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.623 14:08:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.623 14:08:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.623 14:08:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:37.623 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.623 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:37.881 nvme0n1 00:25:37.881 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.881 14:08:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.881 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.881 14:08:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:37.881 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:37.881 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.881 14:08:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.881 14:08:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.881 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.881 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:37.881 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
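The get_main_ns_ip fragment that repeats before every attach (nvmf/common.sh@717-731) resolves to 10.0.0.1 here because the transport is tcp. Reconstructed from the traced statements as a sketch - $TEST_TRANSPORT stands in for whatever variable the suite actually expands to 'tcp', which the trace does not name:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # name of the variable holding the target-side IP for RDMA
            [tcp]=NVMF_INITIATOR_IP       # name of the variable holding the initiator IP for TCP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}                    # trace: ip=NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                             # indirect expansion, trace: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # trace: echo 10.0.0.1
    }

The echoed address is what feeds the -a 10.0.0.1 argument of bdev_nvme_attach_controller in each pass.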
00:25:37.881 14:08:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:37.881 14:08:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:37.881 14:08:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:37.881 14:08:17 -- host/auth.sh@44 -- # digest=sha384 00:25:37.881 14:08:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.881 14:08:17 -- host/auth.sh@44 -- # keyid=2 00:25:37.881 14:08:17 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:37.881 14:08:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:37.881 14:08:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:37.881 14:08:17 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:37.881 14:08:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:25:37.881 14:08:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:37.881 14:08:17 -- host/auth.sh@68 -- # digest=sha384 00:25:37.881 14:08:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:37.881 14:08:17 -- host/auth.sh@68 -- # keyid=2 00:25:37.881 14:08:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:37.881 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.881 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:37.881 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:37.881 14:08:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:37.881 14:08:17 -- nvmf/common.sh@717 -- # local ip 00:25:37.881 14:08:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:37.881 14:08:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:37.881 14:08:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.881 14:08:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.881 14:08:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:37.881 14:08:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.881 14:08:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:37.881 14:08:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:37.881 14:08:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:37.881 14:08:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:37.881 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:37.881 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.139 nvme0n1 00:25:38.139 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.139 14:08:17 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.139 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.139 14:08:17 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:38.139 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.139 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.139 14:08:17 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.139 14:08:17 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.139 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.139 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.139 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.139 14:08:17 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:38.139 14:08:17 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
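Every nvme0n1 block in this trace is one pass of connect_authenticate (host/auth.sh@66-74): configure the allowed digest and DH group on the SPDK host, attach with the matching key slot, confirm the controller actually came up, then detach. Condensed from the commands visible above into a sketch; the real helper may add error handling and namespace checks that xtrace does not show here:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
        # the attach only succeeds if the DH-HMAC-CHAP handshake completes, so the
        # controller name is asserted before the session is torn down again
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }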
00:25:38.139 14:08:17 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:38.139 14:08:17 -- host/auth.sh@44 -- # digest=sha384 00:25:38.139 14:08:17 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.139 14:08:17 -- host/auth.sh@44 -- # keyid=3 00:25:38.139 14:08:17 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:38.139 14:08:17 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:38.139 14:08:17 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:38.139 14:08:17 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:38.139 14:08:17 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:25:38.139 14:08:17 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:38.139 14:08:17 -- host/auth.sh@68 -- # digest=sha384 00:25:38.139 14:08:17 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:38.139 14:08:17 -- host/auth.sh@68 -- # keyid=3 00:25:38.139 14:08:17 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:38.139 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.139 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.139 14:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.139 14:08:17 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:38.139 14:08:17 -- nvmf/common.sh@717 -- # local ip 00:25:38.139 14:08:17 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:38.139 14:08:17 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:38.139 14:08:17 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.139 14:08:17 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.139 14:08:17 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:38.140 14:08:17 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.140 14:08:17 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:38.140 14:08:17 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:38.140 14:08:17 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:38.140 14:08:17 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:38.140 14:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.140 14:08:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.707 nvme0n1 00:25:38.707 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.707 14:08:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.707 14:08:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:38.707 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.707 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.707 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.707 14:08:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.707 14:08:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.707 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.707 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.707 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.707 14:08:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:38.707 14:08:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:38.707 14:08:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:38.707 14:08:18 -- host/auth.sh@44 -- 
# digest=sha384 00:25:38.707 14:08:18 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.707 14:08:18 -- host/auth.sh@44 -- # keyid=4 00:25:38.707 14:08:18 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:38.707 14:08:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:38.707 14:08:18 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:38.707 14:08:18 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:38.707 14:08:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:25:38.707 14:08:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:38.707 14:08:18 -- host/auth.sh@68 -- # digest=sha384 00:25:38.707 14:08:18 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:38.707 14:08:18 -- host/auth.sh@68 -- # keyid=4 00:25:38.707 14:08:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:38.707 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.707 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.707 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.707 14:08:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:38.707 14:08:18 -- nvmf/common.sh@717 -- # local ip 00:25:38.707 14:08:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:38.707 14:08:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:38.707 14:08:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.707 14:08:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.707 14:08:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:38.707 14:08:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.707 14:08:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:38.707 14:08:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:38.707 14:08:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:38.707 14:08:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.707 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.707 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.966 nvme0n1 00:25:38.966 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.966 14:08:18 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.966 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.966 14:08:18 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:38.966 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.966 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.966 14:08:18 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.966 14:08:18 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.966 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.966 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.966 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.966 14:08:18 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.966 14:08:18 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:38.966 14:08:18 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:38.966 14:08:18 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:38.966 14:08:18 -- host/auth.sh@44 -- # 
digest=sha384 00:25:38.966 14:08:18 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.966 14:08:18 -- host/auth.sh@44 -- # keyid=0 00:25:38.966 14:08:18 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:38.966 14:08:18 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:38.967 14:08:18 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:38.967 14:08:18 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:38.967 14:08:18 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:25:38.967 14:08:18 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:38.967 14:08:18 -- host/auth.sh@68 -- # digest=sha384 00:25:38.967 14:08:18 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:38.967 14:08:18 -- host/auth.sh@68 -- # keyid=0 00:25:38.967 14:08:18 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:38.967 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.967 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:38.967 14:08:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:38.967 14:08:18 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:38.967 14:08:18 -- nvmf/common.sh@717 -- # local ip 00:25:38.967 14:08:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:38.967 14:08:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:38.967 14:08:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.967 14:08:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.967 14:08:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:38.967 14:08:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.967 14:08:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:38.967 14:08:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:38.967 14:08:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:38.967 14:08:18 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:38.967 14:08:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:38.967 14:08:18 -- common/autotest_common.sh@10 -- # set +x 00:25:39.635 nvme0n1 00:25:39.635 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.635 14:08:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.635 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.635 14:08:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:39.635 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:39.635 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.635 14:08:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.635 14:08:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.635 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.635 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:39.635 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.635 14:08:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:39.635 14:08:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:39.635 14:08:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:39.635 14:08:19 -- host/auth.sh@44 -- # digest=sha384 00:25:39.635 14:08:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.635 14:08:19 -- host/auth.sh@44 -- # keyid=1 00:25:39.635 14:08:19 -- 
host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:39.635 14:08:19 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:39.635 14:08:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:39.635 14:08:19 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:39.635 14:08:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:25:39.635 14:08:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:39.635 14:08:19 -- host/auth.sh@68 -- # digest=sha384 00:25:39.635 14:08:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:39.635 14:08:19 -- host/auth.sh@68 -- # keyid=1 00:25:39.635 14:08:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:39.635 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.635 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:39.635 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:39.635 14:08:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:39.635 14:08:19 -- nvmf/common.sh@717 -- # local ip 00:25:39.635 14:08:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:39.635 14:08:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:39.635 14:08:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.635 14:08:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.635 14:08:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:39.635 14:08:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.635 14:08:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:39.635 14:08:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:39.635 14:08:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:39.635 14:08:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:39.635 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:39.635 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.203 nvme0n1 00:25:40.203 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.203 14:08:19 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.203 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.203 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.203 14:08:19 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:40.203 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.203 14:08:19 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.203 14:08:19 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.203 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.203 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.203 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.203 14:08:19 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:40.203 14:08:19 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:40.203 14:08:19 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:40.203 14:08:19 -- host/auth.sh@44 -- # digest=sha384 00:25:40.203 14:08:19 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.203 14:08:19 -- host/auth.sh@44 -- # keyid=2 00:25:40.203 14:08:19 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:40.203 14:08:19 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:40.203 14:08:19 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:40.203 14:08:19 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:40.203 14:08:19 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:25:40.203 14:08:19 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:40.203 14:08:19 -- host/auth.sh@68 -- # digest=sha384 00:25:40.203 14:08:19 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:40.203 14:08:19 -- host/auth.sh@68 -- # keyid=2 00:25:40.203 14:08:19 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.203 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.203 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.203 14:08:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.203 14:08:19 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:40.203 14:08:19 -- nvmf/common.sh@717 -- # local ip 00:25:40.203 14:08:19 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:40.203 14:08:19 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:40.203 14:08:19 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.203 14:08:19 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.203 14:08:19 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:40.203 14:08:19 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.203 14:08:19 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:40.203 14:08:19 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:40.203 14:08:19 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:40.203 14:08:19 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:40.203 14:08:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.203 14:08:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.772 nvme0n1 00:25:40.772 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.772 14:08:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.772 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.772 14:08:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:40.772 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:40.772 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.772 14:08:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.772 14:08:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.772 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.772 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:40.772 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.772 14:08:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:40.772 14:08:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:40.772 14:08:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:40.772 14:08:20 -- host/auth.sh@44 -- # digest=sha384 00:25:40.772 14:08:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.772 14:08:20 -- host/auth.sh@44 -- # keyid=3 00:25:40.772 14:08:20 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:40.772 14:08:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:40.772 14:08:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:40.772 14:08:20 -- host/auth.sh@49 
-- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:40.772 14:08:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:25:40.772 14:08:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:40.772 14:08:20 -- host/auth.sh@68 -- # digest=sha384 00:25:40.772 14:08:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:40.772 14:08:20 -- host/auth.sh@68 -- # keyid=3 00:25:40.772 14:08:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:40.772 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.772 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:40.772 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:40.772 14:08:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:40.772 14:08:20 -- nvmf/common.sh@717 -- # local ip 00:25:40.772 14:08:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:40.772 14:08:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:40.772 14:08:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.772 14:08:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.772 14:08:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:40.772 14:08:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.772 14:08:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:40.772 14:08:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:40.772 14:08:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:40.772 14:08:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:40.772 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:40.772 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.359 nvme0n1 00:25:41.359 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.359 14:08:20 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.359 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.359 14:08:20 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.359 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.359 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.359 14:08:20 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.359 14:08:20 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.359 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.359 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.359 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.359 14:08:20 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.359 14:08:20 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:41.359 14:08:20 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.359 14:08:20 -- host/auth.sh@44 -- # digest=sha384 00:25:41.359 14:08:20 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.359 14:08:20 -- host/auth.sh@44 -- # keyid=4 00:25:41.359 14:08:20 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:41.359 14:08:20 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:25:41.359 14:08:20 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:41.359 14:08:20 -- host/auth.sh@49 -- # echo 
DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:41.359 14:08:20 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:25:41.359 14:08:20 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.359 14:08:20 -- host/auth.sh@68 -- # digest=sha384 00:25:41.359 14:08:20 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:41.359 14:08:20 -- host/auth.sh@68 -- # keyid=4 00:25:41.359 14:08:20 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:41.359 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.359 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.359 14:08:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.359 14:08:20 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:41.359 14:08:20 -- nvmf/common.sh@717 -- # local ip 00:25:41.359 14:08:20 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:41.359 14:08:20 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:41.359 14:08:20 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.359 14:08:20 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.359 14:08:20 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.359 14:08:20 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.359 14:08:20 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.359 14:08:20 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.359 14:08:20 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.359 14:08:20 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.359 14:08:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.359 14:08:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.929 nvme0n1 00:25:41.929 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.929 14:08:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.929 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.929 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:41.929 14:08:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.929 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.929 14:08:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.929 14:08:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.929 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.929 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:41.929 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.929 14:08:21 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:25:41.929 14:08:21 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.929 14:08:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:41.929 14:08:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:41.929 14:08:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:41.929 14:08:21 -- host/auth.sh@44 -- # digest=sha512 00:25:41.929 14:08:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.929 14:08:21 -- host/auth.sh@44 -- # keyid=0 00:25:41.929 14:08:21 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:41.929 14:08:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:41.929 14:08:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:41.929 
14:08:21 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:41.929 14:08:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:25:41.929 14:08:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:41.929 14:08:21 -- host/auth.sh@68 -- # digest=sha512 00:25:41.929 14:08:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:41.929 14:08:21 -- host/auth.sh@68 -- # keyid=0 00:25:41.929 14:08:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:41.929 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.929 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:41.929 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.929 14:08:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:41.929 14:08:21 -- nvmf/common.sh@717 -- # local ip 00:25:41.929 14:08:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:41.929 14:08:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:41.929 14:08:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.929 14:08:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.929 14:08:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:41.929 14:08:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.929 14:08:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:41.929 14:08:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:41.929 14:08:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:41.929 14:08:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:41.929 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.929 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:41.929 nvme0n1 00:25:41.929 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:41.929 14:08:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.929 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:41.929 14:08:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:41.929 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.189 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.189 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.189 14:08:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:42.189 14:08:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.189 14:08:21 -- host/auth.sh@44 -- # digest=sha512 00:25:42.189 14:08:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.189 14:08:21 -- host/auth.sh@44 -- # keyid=1 00:25:42.189 14:08:21 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:42.189 14:08:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.189 14:08:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:42.189 14:08:21 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:42.189 14:08:21 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:25:42.189 14:08:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.189 14:08:21 -- host/auth.sh@68 -- # digest=sha512 00:25:42.189 14:08:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:42.189 14:08:21 -- host/auth.sh@68 -- # keyid=1 00:25:42.189 14:08:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.189 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.189 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.189 14:08:21 -- nvmf/common.sh@717 -- # local ip 00:25:42.189 14:08:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.189 14:08:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.189 14:08:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.189 14:08:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.189 14:08:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.189 14:08:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.189 14:08:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.189 14:08:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.189 14:08:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.189 14:08:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:42.189 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.189 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 nvme0n1 00:25:42.189 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.189 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.189 14:08:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.189 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.189 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.189 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.189 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.189 14:08:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.189 14:08:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:42.189 14:08:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.189 14:08:21 -- host/auth.sh@44 -- # digest=sha512 00:25:42.189 14:08:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.189 14:08:21 -- host/auth.sh@44 -- # keyid=2 00:25:42.189 14:08:21 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:42.189 14:08:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.189 14:08:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:42.189 14:08:21 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:42.190 14:08:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:25:42.190 14:08:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.190 14:08:21 -- 
host/auth.sh@68 -- # digest=sha512 00:25:42.190 14:08:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:42.190 14:08:21 -- host/auth.sh@68 -- # keyid=2 00:25:42.190 14:08:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.190 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.190 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.190 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.190 14:08:21 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.190 14:08:21 -- nvmf/common.sh@717 -- # local ip 00:25:42.190 14:08:21 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.190 14:08:21 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.190 14:08:21 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.190 14:08:21 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.190 14:08:21 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.190 14:08:21 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.190 14:08:21 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.190 14:08:21 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.190 14:08:21 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.190 14:08:21 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:42.190 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.190 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 nvme0n1 00:25:42.515 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:21 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.515 14:08:21 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.515 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:21 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.515 14:08:21 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.515 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 14:08:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:21 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.515 14:08:21 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:42.515 14:08:21 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.515 14:08:21 -- host/auth.sh@44 -- # digest=sha512 00:25:42.515 14:08:21 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.515 14:08:21 -- host/auth.sh@44 -- # keyid=3 00:25:42.515 14:08:21 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:42.515 14:08:21 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.515 14:08:21 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:42.515 14:08:21 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:42.515 14:08:21 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:25:42.515 14:08:21 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.515 14:08:21 -- host/auth.sh@68 -- # digest=sha512 00:25:42.515 14:08:21 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:42.515 14:08:21 
-- host/auth.sh@68 -- # keyid=3 00:25:42.515 14:08:21 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.515 14:08:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:21 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.515 14:08:22 -- nvmf/common.sh@717 -- # local ip 00:25:42.515 14:08:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.515 14:08:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.515 14:08:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.515 14:08:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.515 14:08:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.515 14:08:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.515 14:08:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.515 14:08:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.515 14:08:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.515 14:08:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:42.515 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 nvme0n1 00:25:42.515 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.515 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.515 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.515 14:08:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.515 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.515 14:08:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:42.515 14:08:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.515 14:08:22 -- host/auth.sh@44 -- # digest=sha512 00:25:42.515 14:08:22 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.515 14:08:22 -- host/auth.sh@44 -- # keyid=4 00:25:42.515 14:08:22 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:42.515 14:08:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.515 14:08:22 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:42.515 14:08:22 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:42.515 14:08:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:25:42.515 14:08:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.515 14:08:22 -- host/auth.sh@68 -- # digest=sha512 00:25:42.515 14:08:22 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:25:42.515 14:08:22 -- host/auth.sh@68 -- # keyid=4 00:25:42.515 14:08:22 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:42.515 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.515 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.515 14:08:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.515 14:08:22 -- nvmf/common.sh@717 -- # local ip 00:25:42.515 14:08:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.515 14:08:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.515 14:08:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.515 14:08:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.515 14:08:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.515 14:08:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.515 14:08:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.515 14:08:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.515 14:08:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.515 14:08:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.515 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.515 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.774 nvme0n1 00:25:42.774 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.774 14:08:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.774 14:08:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.774 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.774 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.774 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.774 14:08:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.774 14:08:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.774 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.774 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.774 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.774 14:08:22 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.774 14:08:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:42.774 14:08:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:42.774 14:08:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:42.774 14:08:22 -- host/auth.sh@44 -- # digest=sha512 00:25:42.774 14:08:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.774 14:08:22 -- host/auth.sh@44 -- # keyid=0 00:25:42.774 14:08:22 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:42.774 14:08:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:42.774 14:08:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:42.774 14:08:22 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:42.774 14:08:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:25:42.774 14:08:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:42.774 14:08:22 -- host/auth.sh@68 -- # digest=sha512 00:25:42.774 14:08:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:42.774 14:08:22 -- host/auth.sh@68 -- # keyid=0 00:25:42.774 14:08:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
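At this point the trace has finished one full sha512/ffdhe2048 pass: for each key index the target key is programmed, the host is configured, attached, verified and detached. A sketch of a single host-side iteration of that flow, using only the calls visible in the trace (rpc_cmd is assumed here to be the autotest wrapper around scripts/rpc.py; the address, NQNs and key name are the ones this run uses, and key2 is assumed to refer to a DH-HMAC-CHAP key registered earlier in the same run):

  # allow only the digest/dhgroup pair under test on the host
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # attach to the kernel nvmet target with the matching key
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
  # authentication succeeded if the controller shows up ...
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
  # ... then tear it down before the next key/dhgroup combination
  rpc_cmd bdev_nvme_detach_controller nvme0
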
00:25:42.774 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.774 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.774 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.774 14:08:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:42.774 14:08:22 -- nvmf/common.sh@717 -- # local ip 00:25:42.774 14:08:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:42.774 14:08:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:42.774 14:08:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.774 14:08:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.774 14:08:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:42.774 14:08:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.774 14:08:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:42.774 14:08:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:42.774 14:08:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:42.774 14:08:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:42.774 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.774 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.774 nvme0n1 00:25:42.774 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:42.774 14:08:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:42.774 14:08:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.774 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:42.774 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.033 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.033 14:08:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.033 14:08:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.033 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.033 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.033 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.033 14:08:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.033 14:08:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:43.033 14:08:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.033 14:08:22 -- host/auth.sh@44 -- # digest=sha512 00:25:43.033 14:08:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.033 14:08:22 -- host/auth.sh@44 -- # keyid=1 00:25:43.033 14:08:22 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:43.033 14:08:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.033 14:08:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:43.033 14:08:22 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:43.033 14:08:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:25:43.033 14:08:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.033 14:08:22 -- host/auth.sh@68 -- # digest=sha512 00:25:43.033 14:08:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:43.033 14:08:22 -- host/auth.sh@68 -- # keyid=1 00:25:43.033 14:08:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.033 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.033 14:08:22 -- 
common/autotest_common.sh@10 -- # set +x 00:25:43.033 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.033 14:08:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.033 14:08:22 -- nvmf/common.sh@717 -- # local ip 00:25:43.033 14:08:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.033 14:08:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.033 14:08:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.034 14:08:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.034 14:08:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.034 14:08:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.034 14:08:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.034 14:08:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.034 14:08:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.034 14:08:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:43.034 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.034 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.034 nvme0n1 00:25:43.034 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.034 14:08:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.034 14:08:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.034 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.034 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.034 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.034 14:08:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.034 14:08:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.034 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.034 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.034 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.034 14:08:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.034 14:08:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:43.034 14:08:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.034 14:08:22 -- host/auth.sh@44 -- # digest=sha512 00:25:43.034 14:08:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.034 14:08:22 -- host/auth.sh@44 -- # keyid=2 00:25:43.034 14:08:22 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:43.034 14:08:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.034 14:08:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:43.034 14:08:22 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:43.034 14:08:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:25:43.034 14:08:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.034 14:08:22 -- host/auth.sh@68 -- # digest=sha512 00:25:43.034 14:08:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:43.034 14:08:22 -- host/auth.sh@68 -- # keyid=2 00:25:43.034 14:08:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.034 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.034 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.034 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.034 14:08:22 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:25:43.034 14:08:22 -- nvmf/common.sh@717 -- # local ip 00:25:43.034 14:08:22 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.034 14:08:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.034 14:08:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.034 14:08:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.034 14:08:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.034 14:08:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.034 14:08:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.034 14:08:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.034 14:08:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.034 14:08:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:43.034 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.034 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.293 nvme0n1 00:25:43.293 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.293 14:08:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.293 14:08:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.293 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.293 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.293 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.293 14:08:22 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.293 14:08:22 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.293 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.293 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.293 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.294 14:08:22 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.294 14:08:22 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:43.294 14:08:22 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.294 14:08:22 -- host/auth.sh@44 -- # digest=sha512 00:25:43.294 14:08:22 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.294 14:08:22 -- host/auth.sh@44 -- # keyid=3 00:25:43.294 14:08:22 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:43.294 14:08:22 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.294 14:08:22 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:43.294 14:08:22 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:43.294 14:08:22 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:25:43.294 14:08:22 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.294 14:08:22 -- host/auth.sh@68 -- # digest=sha512 00:25:43.294 14:08:22 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:43.294 14:08:22 -- host/auth.sh@68 -- # keyid=3 00:25:43.294 14:08:22 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.294 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.294 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.294 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.294 14:08:22 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.294 14:08:22 -- nvmf/common.sh@717 -- # local ip 00:25:43.294 14:08:22 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:25:43.294 14:08:22 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.294 14:08:22 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.294 14:08:22 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.294 14:08:22 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.294 14:08:22 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.294 14:08:22 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.294 14:08:22 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.294 14:08:22 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.294 14:08:22 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:43.294 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.294 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.554 nvme0n1 00:25:43.554 14:08:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.554 14:08:22 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.554 14:08:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.554 14:08:22 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.554 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:25:43.554 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.554 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.554 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.554 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.554 14:08:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:43.554 14:08:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.554 14:08:23 -- host/auth.sh@44 -- # digest=sha512 00:25:43.554 14:08:23 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.554 14:08:23 -- host/auth.sh@44 -- # keyid=4 00:25:43.554 14:08:23 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:43.554 14:08:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.554 14:08:23 -- host/auth.sh@48 -- # echo ffdhe3072 00:25:43.554 14:08:23 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:43.554 14:08:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:25:43.554 14:08:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.554 14:08:23 -- host/auth.sh@68 -- # digest=sha512 00:25:43.554 14:08:23 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:25:43.554 14:08:23 -- host/auth.sh@68 -- # keyid=4 00:25:43.554 14:08:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:43.554 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.554 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.554 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.554 14:08:23 -- nvmf/common.sh@717 -- # local ip 00:25:43.554 14:08:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.554 14:08:23 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:25:43.554 14:08:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.554 14:08:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.554 14:08:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.554 14:08:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.554 14:08:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.554 14:08:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.554 14:08:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.554 14:08:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.554 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.554 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.554 nvme0n1 00:25:43.554 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.554 14:08:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.554 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.554 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.554 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.554 14:08:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.554 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.554 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.814 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.814 14:08:23 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.814 14:08:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.814 14:08:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:43.814 14:08:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.814 14:08:23 -- host/auth.sh@44 -- # digest=sha512 00:25:43.814 14:08:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.814 14:08:23 -- host/auth.sh@44 -- # keyid=0 00:25:43.814 14:08:23 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:43.814 14:08:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.814 14:08:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:43.814 14:08:23 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:43.814 14:08:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:25:43.814 14:08:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.814 14:08:23 -- host/auth.sh@68 -- # digest=sha512 00:25:43.814 14:08:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:43.814 14:08:23 -- host/auth.sh@68 -- # keyid=0 00:25:43.814 14:08:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.814 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.814 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.814 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.814 14:08:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:43.814 14:08:23 -- nvmf/common.sh@717 -- # local ip 00:25:43.814 14:08:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:43.814 14:08:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:43.814 14:08:23 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.814 14:08:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.814 14:08:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:43.814 14:08:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.814 14:08:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:43.814 14:08:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:43.814 14:08:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:43.814 14:08:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:43.814 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.814 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.814 nvme0n1 00:25:43.814 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.814 14:08:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.814 14:08:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:43.814 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.814 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.814 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.814 14:08:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.814 14:08:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.814 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.814 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.814 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:43.814 14:08:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:43.814 14:08:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:43.814 14:08:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:43.814 14:08:23 -- host/auth.sh@44 -- # digest=sha512 00:25:43.814 14:08:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.814 14:08:23 -- host/auth.sh@44 -- # keyid=1 00:25:43.814 14:08:23 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:43.814 14:08:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:43.814 14:08:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:43.814 14:08:23 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:43.814 14:08:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:25:43.814 14:08:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:43.814 14:08:23 -- host/auth.sh@68 -- # digest=sha512 00:25:43.814 14:08:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:43.814 14:08:23 -- host/auth.sh@68 -- # keyid=1 00:25:43.814 14:08:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:43.814 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:43.814 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.073 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.073 14:08:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.073 14:08:23 -- nvmf/common.sh@717 -- # local ip 00:25:44.073 14:08:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.073 14:08:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.073 14:08:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.073 14:08:23 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.073 14:08:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.073 14:08:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.073 14:08:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.073 14:08:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.073 14:08:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.073 14:08:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:44.073 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.073 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.073 nvme0n1 00:25:44.073 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.073 14:08:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.073 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.073 14:08:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.073 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.073 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.073 14:08:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.073 14:08:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.073 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.073 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.073 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.073 14:08:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:44.073 14:08:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:44.073 14:08:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:44.073 14:08:23 -- host/auth.sh@44 -- # digest=sha512 00:25:44.073 14:08:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.073 14:08:23 -- host/auth.sh@44 -- # keyid=2 00:25:44.073 14:08:23 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:44.073 14:08:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:44.073 14:08:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:44.073 14:08:23 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:44.073 14:08:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:25:44.074 14:08:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:44.074 14:08:23 -- host/auth.sh@68 -- # digest=sha512 00:25:44.074 14:08:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:44.074 14:08:23 -- host/auth.sh@68 -- # keyid=2 00:25:44.074 14:08:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.074 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.074 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.074 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.074 14:08:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.074 14:08:23 -- nvmf/common.sh@717 -- # local ip 00:25:44.074 14:08:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.074 14:08:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.074 14:08:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.074 14:08:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.074 14:08:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.074 14:08:23 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:25:44.074 14:08:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.074 14:08:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.074 14:08:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.074 14:08:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:44.074 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.074 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 nvme0n1 00:25:44.333 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.333 14:08:23 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.333 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.333 14:08:23 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.333 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.333 14:08:23 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.333 14:08:23 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.333 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.333 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.333 14:08:23 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:44.333 14:08:23 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:44.333 14:08:23 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:44.333 14:08:23 -- host/auth.sh@44 -- # digest=sha512 00:25:44.333 14:08:23 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.333 14:08:23 -- host/auth.sh@44 -- # keyid=3 00:25:44.333 14:08:23 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:44.333 14:08:23 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:44.333 14:08:23 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:44.333 14:08:23 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:44.333 14:08:23 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:25:44.333 14:08:23 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:44.333 14:08:23 -- host/auth.sh@68 -- # digest=sha512 00:25:44.333 14:08:23 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:44.333 14:08:23 -- host/auth.sh@68 -- # keyid=3 00:25:44.333 14:08:23 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.333 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.333 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.333 14:08:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.333 14:08:23 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.333 14:08:23 -- nvmf/common.sh@717 -- # local ip 00:25:44.333 14:08:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.333 14:08:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.333 14:08:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.333 14:08:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.333 14:08:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.333 14:08:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.333 14:08:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.333 14:08:23 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.333 14:08:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.333 14:08:23 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:44.333 14:08:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.333 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:25:44.592 nvme0n1 00:25:44.592 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.592 14:08:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.592 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.592 14:08:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.592 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.592 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.592 14:08:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.592 14:08:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.592 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.592 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.592 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.592 14:08:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:44.592 14:08:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:44.592 14:08:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:44.592 14:08:24 -- host/auth.sh@44 -- # digest=sha512 00:25:44.592 14:08:24 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.592 14:08:24 -- host/auth.sh@44 -- # keyid=4 00:25:44.592 14:08:24 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:44.592 14:08:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:44.592 14:08:24 -- host/auth.sh@48 -- # echo ffdhe4096 00:25:44.592 14:08:24 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:44.592 14:08:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:25:44.592 14:08:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:44.592 14:08:24 -- host/auth.sh@68 -- # digest=sha512 00:25:44.592 14:08:24 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:25:44.592 14:08:24 -- host/auth.sh@68 -- # keyid=4 00:25:44.592 14:08:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:44.592 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.592 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.592 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.592 14:08:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.592 14:08:24 -- nvmf/common.sh@717 -- # local ip 00:25:44.592 14:08:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.592 14:08:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.592 14:08:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.592 14:08:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.592 14:08:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.592 14:08:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.592 14:08:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.592 14:08:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.592 14:08:24 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.592 14:08:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.592 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.592 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.870 nvme0n1 00:25:44.870 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.870 14:08:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.870 14:08:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:44.870 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.870 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.870 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.870 14:08:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.870 14:08:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.870 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.870 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.870 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.870 14:08:24 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.870 14:08:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:44.870 14:08:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:44.870 14:08:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:44.870 14:08:24 -- host/auth.sh@44 -- # digest=sha512 00:25:44.870 14:08:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.870 14:08:24 -- host/auth.sh@44 -- # keyid=0 00:25:44.870 14:08:24 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:44.870 14:08:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:44.870 14:08:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:44.870 14:08:24 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:44.870 14:08:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:25:44.870 14:08:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:44.870 14:08:24 -- host/auth.sh@68 -- # digest=sha512 00:25:44.870 14:08:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:44.870 14:08:24 -- host/auth.sh@68 -- # keyid=0 00:25:44.870 14:08:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:44.870 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.870 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.870 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:44.870 14:08:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:44.870 14:08:24 -- nvmf/common.sh@717 -- # local ip 00:25:44.871 14:08:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:44.871 14:08:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:44.871 14:08:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.871 14:08:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.871 14:08:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:44.871 14:08:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.871 14:08:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:44.871 14:08:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:44.871 14:08:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:44.871 14:08:24 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:44.871 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:44.871 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:45.138 nvme0n1 00:25:45.138 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.138 14:08:24 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.138 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.138 14:08:24 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:45.138 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:45.138 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.138 14:08:24 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.138 14:08:24 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.138 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.138 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:45.138 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.138 14:08:24 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:45.139 14:08:24 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:45.139 14:08:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:45.139 14:08:24 -- host/auth.sh@44 -- # digest=sha512 00:25:45.139 14:08:24 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.139 14:08:24 -- host/auth.sh@44 -- # keyid=1 00:25:45.139 14:08:24 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:45.139 14:08:24 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:45.139 14:08:24 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:45.139 14:08:24 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:45.139 14:08:24 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:25:45.139 14:08:24 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:45.139 14:08:24 -- host/auth.sh@68 -- # digest=sha512 00:25:45.139 14:08:24 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:45.139 14:08:24 -- host/auth.sh@68 -- # keyid=1 00:25:45.139 14:08:24 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.139 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.139 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:45.139 14:08:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.458 14:08:24 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:45.458 14:08:24 -- nvmf/common.sh@717 -- # local ip 00:25:45.458 14:08:24 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:45.458 14:08:24 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:45.458 14:08:24 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.458 14:08:24 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.458 14:08:24 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:45.458 14:08:24 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.458 14:08:24 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:45.458 14:08:24 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:45.458 14:08:24 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:45.458 14:08:24 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:45.458 14:08:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.458 14:08:24 -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 nvme0n1 00:25:45.458 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.458 14:08:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.458 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.458 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 14:08:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:45.458 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.716 14:08:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.716 14:08:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.716 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.716 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.716 14:08:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:45.716 14:08:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:45.716 14:08:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:45.716 14:08:25 -- host/auth.sh@44 -- # digest=sha512 00:25:45.716 14:08:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.716 14:08:25 -- host/auth.sh@44 -- # keyid=2 00:25:45.716 14:08:25 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:45.716 14:08:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:45.716 14:08:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:45.716 14:08:25 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:45.716 14:08:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:25:45.716 14:08:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:45.716 14:08:25 -- host/auth.sh@68 -- # digest=sha512 00:25:45.716 14:08:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:45.716 14:08:25 -- host/auth.sh@68 -- # keyid=2 00:25:45.716 14:08:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.716 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.716 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.716 14:08:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:45.716 14:08:25 -- nvmf/common.sh@717 -- # local ip 00:25:45.716 14:08:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:45.716 14:08:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:45.716 14:08:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.716 14:08:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.716 14:08:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:45.716 14:08:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.716 14:08:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:45.716 14:08:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:45.716 14:08:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:45.716 14:08:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:45.716 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.716 14:08:25 -- 
common/autotest_common.sh@10 -- # set +x 00:25:45.974 nvme0n1 00:25:45.974 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.974 14:08:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.974 14:08:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:45.974 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.974 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:45.974 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.974 14:08:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.974 14:08:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.974 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.974 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:45.974 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.974 14:08:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:45.974 14:08:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:45.974 14:08:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:45.975 14:08:25 -- host/auth.sh@44 -- # digest=sha512 00:25:45.975 14:08:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.975 14:08:25 -- host/auth.sh@44 -- # keyid=3 00:25:45.975 14:08:25 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:45.975 14:08:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:45.975 14:08:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:45.975 14:08:25 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:45.975 14:08:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:25:45.975 14:08:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:45.975 14:08:25 -- host/auth.sh@68 -- # digest=sha512 00:25:45.975 14:08:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:45.975 14:08:25 -- host/auth.sh@68 -- # keyid=3 00:25:45.975 14:08:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:45.975 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.975 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:45.975 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.975 14:08:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:45.975 14:08:25 -- nvmf/common.sh@717 -- # local ip 00:25:45.975 14:08:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:45.975 14:08:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:45.975 14:08:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.975 14:08:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.975 14:08:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:45.975 14:08:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.975 14:08:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:45.975 14:08:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:45.975 14:08:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:45.975 14:08:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:45.975 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.975 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:46.234 nvme0n1 00:25:46.234 14:08:25 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:25:46.234 14:08:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:46.234 14:08:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.234 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.234 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:46.234 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.234 14:08:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.234 14:08:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.234 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.234 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:46.234 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.234 14:08:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:46.234 14:08:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:46.234 14:08:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:46.234 14:08:25 -- host/auth.sh@44 -- # digest=sha512 00:25:46.234 14:08:25 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.234 14:08:25 -- host/auth.sh@44 -- # keyid=4 00:25:46.234 14:08:25 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:46.234 14:08:25 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:46.234 14:08:25 -- host/auth.sh@48 -- # echo ffdhe6144 00:25:46.234 14:08:25 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:46.234 14:08:25 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:25:46.234 14:08:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:46.234 14:08:25 -- host/auth.sh@68 -- # digest=sha512 00:25:46.234 14:08:25 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:25:46.234 14:08:25 -- host/auth.sh@68 -- # keyid=4 00:25:46.234 14:08:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:46.234 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.234 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:46.234 14:08:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.234 14:08:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:46.234 14:08:25 -- nvmf/common.sh@717 -- # local ip 00:25:46.234 14:08:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:46.234 14:08:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:46.234 14:08:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.234 14:08:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.492 14:08:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:46.492 14:08:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.492 14:08:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:46.492 14:08:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:46.492 14:08:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:46.492 14:08:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.492 14:08:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.492 14:08:25 -- common/autotest_common.sh@10 -- # set +x 00:25:46.752 nvme0n1 00:25:46.752 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.752 14:08:26 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:46.752 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.752 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:46.752 14:08:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:46.752 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.752 14:08:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.752 14:08:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.752 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.752 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:46.752 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.752 14:08:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.752 14:08:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:46.752 14:08:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:46.752 14:08:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:46.752 14:08:26 -- host/auth.sh@44 -- # digest=sha512 00:25:46.752 14:08:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.752 14:08:26 -- host/auth.sh@44 -- # keyid=0 00:25:46.752 14:08:26 -- host/auth.sh@45 -- # key=DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:46.752 14:08:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:46.752 14:08:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:46.752 14:08:26 -- host/auth.sh@49 -- # echo DHHC-1:00:NGI2YzZmNTFjYWRlYjRjMmE1OTE1YTgwOWU5Zjc4YTUEVEC5: 00:25:46.752 14:08:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:25:46.752 14:08:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:46.752 14:08:26 -- host/auth.sh@68 -- # digest=sha512 00:25:46.752 14:08:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:46.752 14:08:26 -- host/auth.sh@68 -- # keyid=0 00:25:46.752 14:08:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:46.752 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.752 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:46.752 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:46.752 14:08:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:46.752 14:08:26 -- nvmf/common.sh@717 -- # local ip 00:25:46.752 14:08:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:46.752 14:08:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:46.752 14:08:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.752 14:08:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.752 14:08:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:46.752 14:08:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.752 14:08:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:46.752 14:08:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:46.752 14:08:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:46.752 14:08:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:25:46.752 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:46.752 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:47.320 nvme0n1 00:25:47.320 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.320 14:08:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.320 14:08:26 -- host/auth.sh@73 -- # jq -r '.[].name' 
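The trace is now into the sha512/ffdhe8192 pass, the last DH group in the sweep. The loop driving all of these iterations can be read off the host/auth.sh@108-111 markers above; as a sketch only, assuming the dhgroups and keys arrays populated earlier in the script:

  # sweep every DH group against every configured key index (host/auth.sh@108-111)
  for dhgroup in "${dhgroups[@]}"; do              # ffdhe2048 ... ffdhe8192
      for keyid in "${!keys[@]}"; do               # 0..4
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # program the target-side key (the echo lines)
          connect_authenticate sha512 "$dhgroup" "$keyid"   # host side: set_options, attach, verify, detach
      done
  done
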
00:25:47.320 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.320 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:47.320 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.320 14:08:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.320 14:08:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.320 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.320 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:47.320 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.320 14:08:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:47.320 14:08:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:47.320 14:08:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:47.320 14:08:26 -- host/auth.sh@44 -- # digest=sha512 00:25:47.320 14:08:26 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.320 14:08:26 -- host/auth.sh@44 -- # keyid=1 00:25:47.320 14:08:26 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:47.320 14:08:26 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:47.320 14:08:26 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:47.320 14:08:26 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:47.320 14:08:26 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:25:47.320 14:08:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:47.320 14:08:26 -- host/auth.sh@68 -- # digest=sha512 00:25:47.320 14:08:26 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:47.320 14:08:26 -- host/auth.sh@68 -- # keyid=1 00:25:47.320 14:08:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.320 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.320 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:47.320 14:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.320 14:08:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:47.320 14:08:26 -- nvmf/common.sh@717 -- # local ip 00:25:47.320 14:08:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:47.320 14:08:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:47.320 14:08:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.320 14:08:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.320 14:08:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:47.320 14:08:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.320 14:08:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:47.320 14:08:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:47.320 14:08:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:47.320 14:08:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:25:47.320 14:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.320 14:08:26 -- common/autotest_common.sh@10 -- # set +x 00:25:47.888 nvme0n1 00:25:47.888 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.888 14:08:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:47.888 14:08:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.888 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.888 14:08:27 -- 
common/autotest_common.sh@10 -- # set +x 00:25:47.888 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.888 14:08:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.888 14:08:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.888 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.888 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:47.888 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.888 14:08:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:47.888 14:08:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:47.888 14:08:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:47.888 14:08:27 -- host/auth.sh@44 -- # digest=sha512 00:25:47.888 14:08:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.888 14:08:27 -- host/auth.sh@44 -- # keyid=2 00:25:47.888 14:08:27 -- host/auth.sh@45 -- # key=DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:47.888 14:08:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:47.888 14:08:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:47.888 14:08:27 -- host/auth.sh@49 -- # echo DHHC-1:01:NjAwNjgzZDA3NzAwZGQzYTE5MmE1MzNmMTZmNGE3ZDjkbRXu: 00:25:47.888 14:08:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:25:47.888 14:08:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:47.888 14:08:27 -- host/auth.sh@68 -- # digest=sha512 00:25:47.888 14:08:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:47.888 14:08:27 -- host/auth.sh@68 -- # keyid=2 00:25:47.888 14:08:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:47.888 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.888 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:47.888 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:47.888 14:08:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:47.888 14:08:27 -- nvmf/common.sh@717 -- # local ip 00:25:47.888 14:08:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:47.888 14:08:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:47.888 14:08:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.888 14:08:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.888 14:08:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:47.888 14:08:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.888 14:08:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:47.888 14:08:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:47.888 14:08:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:47.888 14:08:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:47.888 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:47.888 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:48.467 nvme0n1 00:25:48.467 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.467 14:08:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:48.468 14:08:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.468 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.468 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:48.468 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.468 14:08:27 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:48.468 14:08:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.468 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.468 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:48.468 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.468 14:08:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:48.468 14:08:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:48.468 14:08:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:48.468 14:08:27 -- host/auth.sh@44 -- # digest=sha512 00:25:48.468 14:08:27 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.468 14:08:27 -- host/auth.sh@44 -- # keyid=3 00:25:48.468 14:08:27 -- host/auth.sh@45 -- # key=DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:48.468 14:08:27 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:48.468 14:08:27 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:48.468 14:08:27 -- host/auth.sh@49 -- # echo DHHC-1:02:NzU2YTAyZGE3ZTBhYmI3NmE2ZDc4NWNjZjc5NmJkYTQ1MjQ5MzQ1OGRjM2RjM2Vk0l6HfQ==: 00:25:48.468 14:08:27 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:25:48.468 14:08:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:48.468 14:08:27 -- host/auth.sh@68 -- # digest=sha512 00:25:48.468 14:08:27 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:48.468 14:08:27 -- host/auth.sh@68 -- # keyid=3 00:25:48.468 14:08:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:48.468 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.468 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:48.468 14:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:48.468 14:08:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:48.468 14:08:27 -- nvmf/common.sh@717 -- # local ip 00:25:48.468 14:08:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:48.468 14:08:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:48.468 14:08:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.468 14:08:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.468 14:08:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:48.468 14:08:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.468 14:08:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:48.468 14:08:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:48.468 14:08:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:48.468 14:08:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:25:48.468 14:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:48.468 14:08:27 -- common/autotest_common.sh@10 -- # set +x 00:25:49.035 nvme0n1 00:25:49.035 14:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.035 14:08:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.035 14:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.035 14:08:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:49.035 14:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.035 14:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.035 14:08:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.035 14:08:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.035 
14:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.035 14:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.035 14:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.035 14:08:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:25:49.035 14:08:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:49.035 14:08:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:49.035 14:08:28 -- host/auth.sh@44 -- # digest=sha512 00:25:49.035 14:08:28 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.035 14:08:28 -- host/auth.sh@44 -- # keyid=4 00:25:49.035 14:08:28 -- host/auth.sh@45 -- # key=DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:49.035 14:08:28 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:25:49.035 14:08:28 -- host/auth.sh@48 -- # echo ffdhe8192 00:25:49.035 14:08:28 -- host/auth.sh@49 -- # echo DHHC-1:03:Yzc5MGUzNjI3YTk3ZDM5NmRlMjhjNDBiOTVlYmY3MjI4ZWJhMDk5NDljMDczMDcxOTY3OWEzYjAyMzZhNDdkOXjTdLQ=: 00:25:49.035 14:08:28 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:25:49.035 14:08:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:25:49.035 14:08:28 -- host/auth.sh@68 -- # digest=sha512 00:25:49.035 14:08:28 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:25:49.035 14:08:28 -- host/auth.sh@68 -- # keyid=4 00:25:49.035 14:08:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:49.035 14:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.035 14:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.035 14:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.035 14:08:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:25:49.035 14:08:28 -- nvmf/common.sh@717 -- # local ip 00:25:49.035 14:08:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:49.035 14:08:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:49.035 14:08:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.035 14:08:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.036 14:08:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:49.036 14:08:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.036 14:08:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:49.036 14:08:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:49.036 14:08:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:49.036 14:08:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.036 14:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.036 14:08:28 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 nvme0n1 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:25:49.604 14:08:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 
14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:49.604 14:08:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:25:49.604 14:08:29 -- host/auth.sh@44 -- # digest=sha256 00:25:49.604 14:08:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.604 14:08:29 -- host/auth.sh@44 -- # keyid=1 00:25:49.604 14:08:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:49.604 14:08:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:25:49.604 14:08:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:25:49.604 14:08:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MWU0YTZhMjQ2YzkyNzlmZjk1ZmI0NzE1MTNlMTMzYmE2NmUzMzdkMWU1OTBhYWM0gnLjbQ==: 00:25:49.604 14:08:29 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@119 -- # get_main_ns_ip 00:25:49.604 14:08:29 -- nvmf/common.sh@717 -- # local ip 00:25:49.604 14:08:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:49.604 14:08:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:49.604 14:08:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.604 14:08:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.604 14:08:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:49.604 14:08:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.604 14:08:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:49.604 14:08:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:49.604 14:08:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:49.604 14:08:29 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:49.604 14:08:29 -- common/autotest_common.sh@638 -- # local es=0 00:25:49.604 14:08:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:49.604 14:08:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:49.604 14:08:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:49.604 14:08:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:49.604 14:08:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:49.604 14:08:29 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 2024/04/26 14:08:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:49.604 request: 00:25:49.604 { 00:25:49.604 "method": 
"bdev_nvme_attach_controller", 00:25:49.604 "params": { 00:25:49.604 "name": "nvme0", 00:25:49.604 "trtype": "tcp", 00:25:49.604 "traddr": "10.0.0.1", 00:25:49.604 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:49.604 "adrfam": "ipv4", 00:25:49.604 "trsvcid": "4420", 00:25:49.604 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:25:49.604 } 00:25:49.604 } 00:25:49.604 Got JSON-RPC error response 00:25:49.604 GoRPCClient: error on JSON-RPC call 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:49.604 14:08:29 -- common/autotest_common.sh@641 -- # es=1 00:25:49.604 14:08:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:49.604 14:08:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:49.604 14:08:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:49.604 14:08:29 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.604 14:08:29 -- host/auth.sh@121 -- # jq length 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:25:49.604 14:08:29 -- host/auth.sh@124 -- # get_main_ns_ip 00:25:49.604 14:08:29 -- nvmf/common.sh@717 -- # local ip 00:25:49.604 14:08:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:25:49.604 14:08:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:25:49.604 14:08:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.604 14:08:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.604 14:08:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:25:49.604 14:08:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.604 14:08:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:25:49.604 14:08:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:25:49.604 14:08:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:25:49.604 14:08:29 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:49.604 14:08:29 -- common/autotest_common.sh@638 -- # local es=0 00:25:49.604 14:08:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:49.604 14:08:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:25:49.604 14:08:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:49.604 14:08:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:25:49.604 14:08:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:49.604 14:08:29 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 2024/04/26 14:08:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:49.604 
request: 00:25:49.604 { 00:25:49.604 "method": "bdev_nvme_attach_controller", 00:25:49.604 "params": { 00:25:49.604 "name": "nvme0", 00:25:49.604 "trtype": "tcp", 00:25:49.604 "traddr": "10.0.0.1", 00:25:49.604 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:49.604 "adrfam": "ipv4", 00:25:49.604 "trsvcid": "4420", 00:25:49.604 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:49.604 "dhchap_key": "key2" 00:25:49.604 } 00:25:49.604 } 00:25:49.604 Got JSON-RPC error response 00:25:49.604 GoRPCClient: error on JSON-RPC call 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:25:49.604 14:08:29 -- common/autotest_common.sh@641 -- # es=1 00:25:49.604 14:08:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:49.604 14:08:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:49.604 14:08:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:49.604 14:08:29 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.604 14:08:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:49.604 14:08:29 -- host/auth.sh@127 -- # jq length 00:25:49.604 14:08:29 -- common/autotest_common.sh@10 -- # set +x 00:25:49.604 14:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:49.604 14:08:29 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:25:49.604 14:08:29 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:25:49.604 14:08:29 -- host/auth.sh@130 -- # cleanup 00:25:49.604 14:08:29 -- host/auth.sh@24 -- # nvmftestfini 00:25:49.604 14:08:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:49.604 14:08:29 -- nvmf/common.sh@117 -- # sync 00:25:49.863 14:08:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:49.863 14:08:29 -- nvmf/common.sh@120 -- # set +e 00:25:49.863 14:08:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:49.863 14:08:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:49.863 rmmod nvme_tcp 00:25:49.863 rmmod nvme_fabrics 00:25:49.863 14:08:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:49.863 14:08:29 -- nvmf/common.sh@124 -- # set -e 00:25:49.863 14:08:29 -- nvmf/common.sh@125 -- # return 0 00:25:49.863 14:08:29 -- nvmf/common.sh@478 -- # '[' -n 86026 ']' 00:25:49.863 14:08:29 -- nvmf/common.sh@479 -- # killprocess 86026 00:25:49.863 14:08:29 -- common/autotest_common.sh@936 -- # '[' -z 86026 ']' 00:25:49.863 14:08:29 -- common/autotest_common.sh@940 -- # kill -0 86026 00:25:49.863 14:08:29 -- common/autotest_common.sh@941 -- # uname 00:25:49.863 14:08:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.863 14:08:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86026 00:25:49.863 14:08:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:49.863 killing process with pid 86026 00:25:49.863 14:08:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:49.863 14:08:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86026' 00:25:49.863 14:08:29 -- common/autotest_common.sh@955 -- # kill 86026 00:25:49.864 14:08:29 -- common/autotest_common.sh@960 -- # wait 86026 00:25:50.801 14:08:30 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:50.801 14:08:30 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:50.801 14:08:30 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:50.801 14:08:30 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.801 14:08:30 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.801 14:08:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.801 
14:08:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.801 14:08:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.801 14:08:30 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:51.059 14:08:30 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:51.059 14:08:30 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:51.059 14:08:30 -- host/auth.sh@27 -- # clean_kernel_target 00:25:51.059 14:08:30 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:51.059 14:08:30 -- nvmf/common.sh@675 -- # echo 0 00:25:51.059 14:08:30 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.059 14:08:30 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:51.059 14:08:30 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:51.059 14:08:30 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:51.059 14:08:30 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:25:51.059 14:08:30 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:25:51.059 14:08:30 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:51.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:51.997 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.997 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:51.997 14:08:31 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8q7 /tmp/spdk.key-null.6DU /tmp/spdk.key-sha256.ITx /tmp/spdk.key-sha384.1Z4 /tmp/spdk.key-sha512.bzj /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:25:51.997 14:08:31 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:52.565 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:52.565 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:52.565 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:52.565 00:25:52.565 real 0m35.627s 00:25:52.565 user 0m32.207s 00:25:52.565 sys 0m5.032s 00:25:52.565 14:08:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:52.565 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.565 ************************************ 00:25:52.565 END TEST nvmf_auth 00:25:52.565 ************************************ 00:25:52.565 14:08:32 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:25:52.565 14:08:32 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:52.565 14:08:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:52.565 14:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:52.565 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:25:52.825 ************************************ 00:25:52.825 START TEST nvmf_digest 00:25:52.825 ************************************ 00:25:52.825 14:08:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:52.825 * Looking for test storage... 
00:25:52.825 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:52.825 14:08:32 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:52.825 14:08:32 -- nvmf/common.sh@7 -- # uname -s 00:25:52.825 14:08:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.825 14:08:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.825 14:08:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.825 14:08:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.825 14:08:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.825 14:08:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.825 14:08:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.825 14:08:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.825 14:08:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.825 14:08:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.825 14:08:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:25:52.825 14:08:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:25:52.825 14:08:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.825 14:08:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.825 14:08:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:52.825 14:08:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.825 14:08:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:52.825 14:08:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.825 14:08:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.825 14:08:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.825 14:08:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.825 14:08:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.825 14:08:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.825 14:08:32 -- paths/export.sh@5 -- # export PATH 00:25:52.825 14:08:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.825 14:08:32 -- nvmf/common.sh@47 -- # : 0 00:25:52.825 14:08:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.825 14:08:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.826 14:08:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.826 14:08:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.826 14:08:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.826 14:08:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.826 14:08:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.826 14:08:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.826 14:08:32 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:52.826 14:08:32 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:52.826 14:08:32 -- host/digest.sh@16 -- # runtime=2 00:25:52.826 14:08:32 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:52.826 14:08:32 -- host/digest.sh@138 -- # nvmftestinit 00:25:52.826 14:08:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:52.826 14:08:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.826 14:08:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:52.826 14:08:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:52.826 14:08:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:52.826 14:08:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.826 14:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.826 14:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.085 14:08:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:53.085 14:08:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:53.085 14:08:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:53.085 14:08:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:53.085 14:08:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:53.085 14:08:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:53.085 14:08:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.085 14:08:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.085 14:08:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:53.085 14:08:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:53.085 14:08:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
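Before the digest tests can run, nvmf_veth_init (traced next) builds the virtual test network: a network namespace for the target, veth pairs bridged back to the initiator side, a firewall rule for the NVMe/TCP port, and ping checks. A condensed view of those commands, taken from the trace that follows (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is created the same way and omitted here, as are the per-interface 'ip link set ... up' steps):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                        # target reachable from the host side

The 'Cannot find device' and 'Cannot open network namespace' messages in that sequence are benign: the script tears down any leftover interfaces before recreating them, and on this runner there were none to remove.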
00:25:53.085 14:08:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:53.085 14:08:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:53.085 14:08:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.085 14:08:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:53.085 14:08:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:53.085 14:08:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:53.085 14:08:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:53.085 14:08:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:53.085 14:08:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:53.085 Cannot find device "nvmf_tgt_br" 00:25:53.085 14:08:32 -- nvmf/common.sh@155 -- # true 00:25:53.085 14:08:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.085 Cannot find device "nvmf_tgt_br2" 00:25:53.085 14:08:32 -- nvmf/common.sh@156 -- # true 00:25:53.085 14:08:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:53.085 14:08:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:53.085 Cannot find device "nvmf_tgt_br" 00:25:53.085 14:08:32 -- nvmf/common.sh@158 -- # true 00:25:53.085 14:08:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:53.085 Cannot find device "nvmf_tgt_br2" 00:25:53.085 14:08:32 -- nvmf/common.sh@159 -- # true 00:25:53.085 14:08:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:53.085 14:08:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:53.085 14:08:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.085 14:08:32 -- nvmf/common.sh@162 -- # true 00:25:53.085 14:08:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.085 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:53.085 14:08:32 -- nvmf/common.sh@163 -- # true 00:25:53.085 14:08:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:53.085 14:08:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:53.085 14:08:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:53.085 14:08:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:53.085 14:08:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:53.085 14:08:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:53.085 14:08:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:53.344 14:08:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:53.344 14:08:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:53.344 14:08:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:53.344 14:08:32 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:53.344 14:08:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:53.344 14:08:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:53.344 14:08:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:53.344 14:08:32 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:53.344 14:08:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:53.345 14:08:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:53.345 14:08:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:53.345 14:08:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:53.345 14:08:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:53.345 14:08:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:53.345 14:08:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:53.345 14:08:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:53.345 14:08:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:53.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:25:53.345 00:25:53.345 --- 10.0.0.2 ping statistics --- 00:25:53.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.345 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:53.345 14:08:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:53.345 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:53.345 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:25:53.345 00:25:53.345 --- 10.0.0.3 ping statistics --- 00:25:53.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.345 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:53.345 14:08:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:53.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:25:53.345 00:25:53.345 --- 10.0.0.1 ping statistics --- 00:25:53.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.345 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:53.345 14:08:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.345 14:08:32 -- nvmf/common.sh@422 -- # return 0 00:25:53.345 14:08:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:53.345 14:08:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.345 14:08:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:53.345 14:08:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:53.345 14:08:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.345 14:08:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:53.345 14:08:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:53.345 14:08:32 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:53.345 14:08:32 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:53.345 14:08:32 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:53.345 14:08:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:53.345 14:08:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:53.345 14:08:32 -- common/autotest_common.sh@10 -- # set +x 00:25:53.604 ************************************ 00:25:53.604 START TEST nvmf_digest_clean 00:25:53.604 ************************************ 00:25:53.604 14:08:33 -- common/autotest_common.sh@1111 -- # run_digest 00:25:53.604 14:08:33 -- host/digest.sh@120 -- # local dsa_initiator 00:25:53.604 14:08:33 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:53.604 14:08:33 -- host/digest.sh@121 -- # dsa_initiator=false 
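From this point run_digest exercises the same ddgst-enabled connection under four bdevperf workloads; the four run_bperf calls that follow in this log differ only in I/O pattern, block size, and queue depth (the trailing false is scan_dsa, so CRC-32C stays in the software accel module throughout):

    run_bperf randread  4096   128 false   # 4 KiB random reads,  qd 128
    run_bperf randread  131072  16 false   # 128 KiB random reads, qd 16 (above the 65536-byte zero-copy threshold)
    run_bperf randwrite 4096   128 false   # 4 KiB random writes, qd 128
    run_bperf randwrite 131072  16 false   # 128 KiB random writes, qd 16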
00:25:53.604 14:08:33 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:53.604 14:08:33 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:53.604 14:08:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:53.604 14:08:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:53.604 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:53.604 14:08:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:53.604 14:08:33 -- nvmf/common.sh@470 -- # nvmfpid=87629 00:25:53.604 14:08:33 -- nvmf/common.sh@471 -- # waitforlisten 87629 00:25:53.604 14:08:33 -- common/autotest_common.sh@817 -- # '[' -z 87629 ']' 00:25:53.604 14:08:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.604 14:08:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:53.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.604 14:08:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.604 14:08:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:53.604 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:53.604 [2024-04-26 14:08:33.170520] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:25:53.604 [2024-04-26 14:08:33.170656] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.862 [2024-04-26 14:08:33.349881] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.121 [2024-04-26 14:08:33.585271] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.121 [2024-04-26 14:08:33.585320] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.121 [2024-04-26 14:08:33.585335] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.121 [2024-04-26 14:08:33.585357] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.121 [2024-04-26 14:08:33.585371] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:54.121 [2024-04-26 14:08:33.585408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.381 14:08:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:54.381 14:08:33 -- common/autotest_common.sh@850 -- # return 0 00:25:54.381 14:08:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:54.381 14:08:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:54.381 14:08:33 -- common/autotest_common.sh@10 -- # set +x 00:25:54.381 14:08:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.381 14:08:34 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:54.381 14:08:34 -- host/digest.sh@126 -- # common_target_config 00:25:54.381 14:08:34 -- host/digest.sh@43 -- # rpc_cmd 00:25:54.381 14:08:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:54.381 14:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:54.950 null0 00:25:54.950 [2024-04-26 14:08:34.410301] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.950 [2024-04-26 14:08:34.434404] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.950 14:08:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:54.950 14:08:34 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:54.950 14:08:34 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:54.950 14:08:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:54.950 14:08:34 -- host/digest.sh@80 -- # rw=randread 00:25:54.950 14:08:34 -- host/digest.sh@80 -- # bs=4096 00:25:54.950 14:08:34 -- host/digest.sh@80 -- # qd=128 00:25:54.950 14:08:34 -- host/digest.sh@80 -- # scan_dsa=false 00:25:54.950 14:08:34 -- host/digest.sh@83 -- # bperfpid=87683 00:25:54.950 14:08:34 -- host/digest.sh@84 -- # waitforlisten 87683 /var/tmp/bperf.sock 00:25:54.950 14:08:34 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:54.950 14:08:34 -- common/autotest_common.sh@817 -- # '[' -z 87683 ']' 00:25:54.950 14:08:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:54.950 14:08:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:54.950 14:08:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:54.950 14:08:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.950 14:08:34 -- common/autotest_common.sh@10 -- # set +x 00:25:54.950 [2024-04-26 14:08:34.532240] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
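Each run_bperf starts its own bdevperf instance with --wait-for-rpc and drives it over a dedicated RPC socket. The first randread run, whose startup is traced just above and whose RPC sequence follows, amounts to the steps below; the backgrounding is implicit in the harness, which uses waitforlisten to poll the socket before issuing RPCs:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The --ddgst flag enables the NVMe/TCP data digest on the initiator side, which is what generates the CRC-32C work measured afterwards.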
00:25:54.950 [2024-04-26 14:08:34.532832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87683 ] 00:25:55.208 [2024-04-26 14:08:34.702867] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.466 [2024-04-26 14:08:34.939898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.724 14:08:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.724 14:08:35 -- common/autotest_common.sh@850 -- # return 0 00:25:55.724 14:08:35 -- host/digest.sh@86 -- # false 00:25:55.724 14:08:35 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:55.724 14:08:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:56.291 14:08:35 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.291 14:08:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.550 nvme0n1 00:25:56.808 14:08:36 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:56.808 14:08:36 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.808 Running I/O for 2 seconds... 00:25:58.762 00:25:58.762 Latency(us) 00:25:58.762 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.762 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:58.762 nvme0n1 : 2.00 20368.01 79.56 0.00 0.00 6275.92 3632.12 12633.45 00:25:58.762 =================================================================================================================== 00:25:58.762 Total : 20368.01 79.56 0.00 0.00 6275.92 3632.12 12633.45 00:25:58.762 0 00:25:58.762 14:08:38 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:58.762 14:08:38 -- host/digest.sh@93 -- # get_accel_stats 00:25:58.762 14:08:38 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:58.762 14:08:38 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:58.762 | select(.opcode=="crc32c") 00:25:58.762 | "\(.module_name) \(.executed)"' 00:25:58.762 14:08:38 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:59.021 14:08:38 -- host/digest.sh@94 -- # false 00:25:59.021 14:08:38 -- host/digest.sh@94 -- # exp_module=software 00:25:59.021 14:08:38 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:59.021 14:08:38 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:59.021 14:08:38 -- host/digest.sh@98 -- # killprocess 87683 00:25:59.021 14:08:38 -- common/autotest_common.sh@936 -- # '[' -z 87683 ']' 00:25:59.021 14:08:38 -- common/autotest_common.sh@940 -- # kill -0 87683 00:25:59.021 14:08:38 -- common/autotest_common.sh@941 -- # uname 00:25:59.021 14:08:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:59.021 14:08:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87683 00:25:59.021 14:08:38 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:59.021 14:08:38 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:59.021 killing process with pid 87683 00:25:59.021 
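After perform_tests, the script asks the bdevperf app which accel module executed the CRC-32C operations and how many it ran; with scan_dsa=false it requires the software module and a non-zero count. As traced above:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # read -r acc_module acc_executed   -> expects: acc_module == software, acc_executed > 0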
14:08:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87683' 00:25:59.021 14:08:38 -- common/autotest_common.sh@955 -- # kill 87683 00:25:59.021 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.021 00:25:59.021 Latency(us) 00:25:59.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.021 =================================================================================================================== 00:25:59.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.021 14:08:38 -- common/autotest_common.sh@960 -- # wait 87683 00:26:00.394 14:08:39 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:00.394 14:08:39 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:00.394 14:08:39 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:00.394 14:08:39 -- host/digest.sh@80 -- # rw=randread 00:26:00.394 14:08:39 -- host/digest.sh@80 -- # bs=131072 00:26:00.394 14:08:39 -- host/digest.sh@80 -- # qd=16 00:26:00.394 14:08:39 -- host/digest.sh@80 -- # scan_dsa=false 00:26:00.394 14:08:39 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:00.394 14:08:39 -- host/digest.sh@83 -- # bperfpid=87780 00:26:00.394 14:08:39 -- host/digest.sh@84 -- # waitforlisten 87780 /var/tmp/bperf.sock 00:26:00.394 14:08:39 -- common/autotest_common.sh@817 -- # '[' -z 87780 ']' 00:26:00.394 14:08:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:00.394 14:08:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:00.394 14:08:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:00.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:00.394 14:08:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:00.394 14:08:39 -- common/autotest_common.sh@10 -- # set +x 00:26:00.394 [2024-04-26 14:08:39.709847] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:00.394 [2024-04-26 14:08:39.709972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87780 ] 00:26:00.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.394 Zero copy mechanism will not be used. 
00:26:00.394 [2024-04-26 14:08:39.880102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.653 [2024-04-26 14:08:40.111232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.912 14:08:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:00.912 14:08:40 -- common/autotest_common.sh@850 -- # return 0 00:26:00.912 14:08:40 -- host/digest.sh@86 -- # false 00:26:00.912 14:08:40 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:00.912 14:08:40 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:01.481 14:08:41 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.481 14:08:41 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:01.741 nvme0n1 00:26:01.741 14:08:41 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:01.741 14:08:41 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.000 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.000 Zero copy mechanism will not be used. 00:26:02.000 Running I/O for 2 seconds... 00:26:03.904 00:26:03.904 Latency(us) 00:26:03.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.904 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:03.904 nvme0n1 : 2.00 7596.73 949.59 0.00 0.00 2103.15 556.00 9001.33 00:26:03.904 =================================================================================================================== 00:26:03.904 Total : 7596.73 949.59 0.00 0.00 2103.15 556.00 9001.33 00:26:03.904 0 00:26:03.904 14:08:43 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:03.904 14:08:43 -- host/digest.sh@93 -- # get_accel_stats 00:26:03.904 14:08:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:03.904 14:08:43 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:03.904 14:08:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:03.904 | select(.opcode=="crc32c") 00:26:03.904 | "\(.module_name) \(.executed)"' 00:26:04.165 14:08:43 -- host/digest.sh@94 -- # false 00:26:04.165 14:08:43 -- host/digest.sh@94 -- # exp_module=software 00:26:04.165 14:08:43 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:04.165 14:08:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:04.165 14:08:43 -- host/digest.sh@98 -- # killprocess 87780 00:26:04.165 14:08:43 -- common/autotest_common.sh@936 -- # '[' -z 87780 ']' 00:26:04.165 14:08:43 -- common/autotest_common.sh@940 -- # kill -0 87780 00:26:04.165 14:08:43 -- common/autotest_common.sh@941 -- # uname 00:26:04.165 14:08:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:04.165 14:08:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87780 00:26:04.165 14:08:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:04.165 killing process with pid 87780 00:26:04.165 14:08:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:04.165 14:08:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87780' 00:26:04.165 Received shutdown signal, test time was about 2.000000 seconds 
00:26:04.165 00:26:04.165 Latency(us) 00:26:04.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.165 =================================================================================================================== 00:26:04.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.165 14:08:43 -- common/autotest_common.sh@955 -- # kill 87780 00:26:04.165 14:08:43 -- common/autotest_common.sh@960 -- # wait 87780 00:26:05.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.551 14:08:44 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:05.551 14:08:44 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:05.551 14:08:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:05.551 14:08:44 -- host/digest.sh@80 -- # rw=randwrite 00:26:05.551 14:08:44 -- host/digest.sh@80 -- # bs=4096 00:26:05.551 14:08:44 -- host/digest.sh@80 -- # qd=128 00:26:05.551 14:08:44 -- host/digest.sh@80 -- # scan_dsa=false 00:26:05.551 14:08:44 -- host/digest.sh@83 -- # bperfpid=87882 00:26:05.551 14:08:44 -- host/digest.sh@84 -- # waitforlisten 87882 /var/tmp/bperf.sock 00:26:05.551 14:08:44 -- common/autotest_common.sh@817 -- # '[' -z 87882 ']' 00:26:05.551 14:08:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.551 14:08:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:05.552 14:08:44 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:05.552 14:08:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:05.552 14:08:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:05.552 14:08:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.552 [2024-04-26 14:08:45.049928] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:05.552 [2024-04-26 14:08:45.050529] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87882 ] 00:26:05.552 [2024-04-26 14:08:45.222109] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.811 [2024-04-26 14:08:45.456711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.381 14:08:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:06.381 14:08:45 -- common/autotest_common.sh@850 -- # return 0 00:26:06.382 14:08:45 -- host/digest.sh@86 -- # false 00:26:06.382 14:08:45 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:06.382 14:08:45 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:06.949 14:08:46 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.949 14:08:46 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:07.208 nvme0n1 00:26:07.208 14:08:46 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:07.208 14:08:46 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:07.208 Running I/O for 2 seconds... 00:26:09.746 00:26:09.746 Latency(us) 00:26:09.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.746 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:09.746 nvme0n1 : 2.01 24111.34 94.18 0.00 0.00 5303.87 2526.69 8159.10 00:26:09.746 =================================================================================================================== 00:26:09.746 Total : 24111.34 94.18 0.00 0.00 5303.87 2526.69 8159.10 00:26:09.746 0 00:26:09.746 14:08:48 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:09.746 14:08:48 -- host/digest.sh@93 -- # get_accel_stats 00:26:09.746 14:08:48 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:09.746 14:08:48 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:09.746 | select(.opcode=="crc32c") 00:26:09.746 | "\(.module_name) \(.executed)"' 00:26:09.746 14:08:48 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:09.746 14:08:49 -- host/digest.sh@94 -- # false 00:26:09.746 14:08:49 -- host/digest.sh@94 -- # exp_module=software 00:26:09.746 14:08:49 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:09.746 14:08:49 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:09.746 14:08:49 -- host/digest.sh@98 -- # killprocess 87882 00:26:09.746 14:08:49 -- common/autotest_common.sh@936 -- # '[' -z 87882 ']' 00:26:09.746 14:08:49 -- common/autotest_common.sh@940 -- # kill -0 87882 00:26:09.746 14:08:49 -- common/autotest_common.sh@941 -- # uname 00:26:09.746 14:08:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:09.746 14:08:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87882 00:26:09.746 14:08:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:09.746 14:08:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:09.746 killing process with pid 87882 00:26:09.746 
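The teardown around this point is the killprocess helper from autotest_common.sh; the branch exercised here (a plain reactor process, not a sudo wrapper) boils down to the following, reconstructed from the xtrace:

    kill -0 87882                                                 # still alive?
    [ "$(uname)" = Linux ] && ps --no-headers -o comm= 87882      # -> reactor_1, so not a sudo wrapper
    echo 'killing process with pid 87882'
    kill 87882
    wait 87882                                                    # reap it and propagate its exit status

The 'Received shutdown signal, test time was about 2.000000 seconds' block that bdevperf prints on the way out is its normal response to the signal, not an error.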
14:08:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87882' 00:26:09.746 14:08:49 -- common/autotest_common.sh@955 -- # kill 87882 00:26:09.746 Received shutdown signal, test time was about 2.000000 seconds 00:26:09.746 00:26:09.746 Latency(us) 00:26:09.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.746 =================================================================================================================== 00:26:09.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.746 14:08:49 -- common/autotest_common.sh@960 -- # wait 87882 00:26:10.683 14:08:50 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:10.683 14:08:50 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:10.683 14:08:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:10.683 14:08:50 -- host/digest.sh@80 -- # rw=randwrite 00:26:10.683 14:08:50 -- host/digest.sh@80 -- # bs=131072 00:26:10.683 14:08:50 -- host/digest.sh@80 -- # qd=16 00:26:10.683 14:08:50 -- host/digest.sh@80 -- # scan_dsa=false 00:26:10.683 14:08:50 -- host/digest.sh@83 -- # bperfpid=87980 00:26:10.683 14:08:50 -- host/digest.sh@84 -- # waitforlisten 87980 /var/tmp/bperf.sock 00:26:10.683 14:08:50 -- common/autotest_common.sh@817 -- # '[' -z 87980 ']' 00:26:10.683 14:08:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:10.683 14:08:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:10.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:10.683 14:08:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:10.683 14:08:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:10.683 14:08:50 -- common/autotest_common.sh@10 -- # set +x 00:26:10.683 14:08:50 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:10.683 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:10.683 Zero copy mechanism will not be used. 00:26:10.683 [2024-04-26 14:08:50.169542] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:10.683 [2024-04-26 14:08:50.169667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87980 ] 00:26:10.683 [2024-04-26 14:08:50.339279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.976 [2024-04-26 14:08:50.572417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.544 14:08:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:11.544 14:08:50 -- common/autotest_common.sh@850 -- # return 0 00:26:11.544 14:08:50 -- host/digest.sh@86 -- # false 00:26:11.544 14:08:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:11.544 14:08:50 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:12.112 14:08:51 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.112 14:08:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:12.372 nvme0n1 00:26:12.372 14:08:51 -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:12.372 14:08:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:12.372 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:12.372 Zero copy mechanism will not be used. 00:26:12.372 Running I/O for 2 seconds... 00:26:14.332 00:26:14.332 Latency(us) 00:26:14.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.332 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:14.332 nvme0n1 : 2.00 6063.28 757.91 0.00 0.00 2633.71 2131.89 9317.17 00:26:14.332 =================================================================================================================== 00:26:14.332 Total : 6063.28 757.91 0.00 0.00 2633.71 2131.89 9317.17 00:26:14.332 0 00:26:14.332 14:08:53 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:14.332 14:08:53 -- host/digest.sh@93 -- # get_accel_stats 00:26:14.332 14:08:53 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:14.332 | select(.opcode=="crc32c") 00:26:14.332 | "\(.module_name) \(.executed)"' 00:26:14.332 14:08:53 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:14.332 14:08:53 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:14.590 14:08:54 -- host/digest.sh@94 -- # false 00:26:14.590 14:08:54 -- host/digest.sh@94 -- # exp_module=software 00:26:14.590 14:08:54 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:14.590 14:08:54 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:14.590 14:08:54 -- host/digest.sh@98 -- # killprocess 87980 00:26:14.590 14:08:54 -- common/autotest_common.sh@936 -- # '[' -z 87980 ']' 00:26:14.590 14:08:54 -- common/autotest_common.sh@940 -- # kill -0 87980 00:26:14.590 14:08:54 -- common/autotest_common.sh@941 -- # uname 00:26:14.590 14:08:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:14.590 14:08:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87980 00:26:14.590 14:08:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:14.590 
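[editor's sketch] The two clean-digest runs above (run_bperf randwrite 4096 128 and run_bperf randwrite 131072 16) follow the same bperf pattern visible in the log: bdevperf is launched paused, the framework is started over RPC, an NVMe-oF/TCP controller is attached with data digest enabled, I/O is driven through bdevperf.py, and the accel crc32c counters are read back to confirm which module executed the digests. A condensed sketch of that sequence, with paths, sockets and flags taken from the log lines above (this is not the test script itself, only the visible RPC flow):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bperf.sock

  # launch bdevperf idle: -z makes it wait for a perform_tests RPC,
  # --wait-for-rpc holds framework init until framework_start_init
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # (the harness waits for $SOCK to appear before issuing RPCs)

  $SPDK/scripts/rpc.py -s $SOCK framework_start_init
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # run the 2-second workload, then check which accel module executed the crc32c ops
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $SPDK/scripts/rpc.py -s $SOCK accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'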
killing process with pid 87980 00:26:14.590 Received shutdown signal, test time was about 2.000000 seconds 00:26:14.590 00:26:14.590 Latency(us) 00:26:14.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.590 =================================================================================================================== 00:26:14.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.590 14:08:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:14.590 14:08:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87980' 00:26:14.590 14:08:54 -- common/autotest_common.sh@955 -- # kill 87980 00:26:14.590 14:08:54 -- common/autotest_common.sh@960 -- # wait 87980 00:26:15.966 14:08:55 -- host/digest.sh@132 -- # killprocess 87629 00:26:15.967 14:08:55 -- common/autotest_common.sh@936 -- # '[' -z 87629 ']' 00:26:15.967 14:08:55 -- common/autotest_common.sh@940 -- # kill -0 87629 00:26:15.967 14:08:55 -- common/autotest_common.sh@941 -- # uname 00:26:15.967 14:08:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:15.967 14:08:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87629 00:26:15.967 killing process with pid 87629 00:26:15.967 14:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:15.967 14:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:15.967 14:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87629' 00:26:15.967 14:08:55 -- common/autotest_common.sh@955 -- # kill 87629 00:26:15.967 14:08:55 -- common/autotest_common.sh@960 -- # wait 87629 00:26:17.344 00:26:17.344 real 0m23.886s 00:26:17.344 user 0m43.286s 00:26:17.344 sys 0m5.159s 00:26:17.344 14:08:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:17.344 ************************************ 00:26:17.344 END TEST nvmf_digest_clean 00:26:17.344 ************************************ 00:26:17.344 14:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:17.344 14:08:56 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:17.344 14:08:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:17.344 14:08:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:17.344 14:08:56 -- common/autotest_common.sh@10 -- # set +x 00:26:17.603 ************************************ 00:26:17.603 START TEST nvmf_digest_error 00:26:17.603 ************************************ 00:26:17.603 14:08:57 -- common/autotest_common.sh@1111 -- # run_digest_error 00:26:17.603 14:08:57 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:17.603 14:08:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:17.603 14:08:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:17.603 14:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:17.603 14:08:57 -- nvmf/common.sh@470 -- # nvmfpid=88128 00:26:17.603 14:08:57 -- nvmf/common.sh@471 -- # waitforlisten 88128 00:26:17.603 14:08:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:17.603 14:08:57 -- common/autotest_common.sh@817 -- # '[' -z 88128 ']' 00:26:17.603 14:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.603 14:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:17.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:17.603 14:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.603 14:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:17.603 14:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:17.603 [2024-04-26 14:08:57.191367] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:17.603 [2024-04-26 14:08:57.191508] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.862 [2024-04-26 14:08:57.364119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.121 [2024-04-26 14:08:57.597167] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.121 [2024-04-26 14:08:57.597217] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.121 [2024-04-26 14:08:57.597233] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.121 [2024-04-26 14:08:57.597254] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.121 [2024-04-26 14:08:57.597267] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:18.121 [2024-04-26 14:08:57.597308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.380 14:08:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:18.380 14:08:57 -- common/autotest_common.sh@850 -- # return 0 00:26:18.380 14:08:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:18.380 14:08:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.380 14:08:57 -- common/autotest_common.sh@10 -- # set +x 00:26:18.380 14:08:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.380 14:08:58 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:18.380 14:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.380 14:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.380 [2024-04-26 14:08:58.041330] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:18.380 14:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.380 14:08:58 -- host/digest.sh@105 -- # common_target_config 00:26:18.380 14:08:58 -- host/digest.sh@43 -- # rpc_cmd 00:26:18.380 14:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.380 14:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.948 null0 00:26:18.948 [2024-04-26 14:08:58.439906] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.948 [2024-04-26 14:08:58.463994] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.948 14:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.948 14:08:58 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:18.948 14:08:58 -- host/digest.sh@54 -- # local rw bs qd 00:26:18.948 14:08:58 -- host/digest.sh@56 -- # rw=randread 00:26:18.948 14:08:58 -- host/digest.sh@56 -- # bs=4096 00:26:18.948 14:08:58 -- host/digest.sh@56 -- # qd=128 00:26:18.948 14:08:58 -- host/digest.sh@58 -- # bperfpid=88172 00:26:18.948 14:08:58 -- host/digest.sh@60 -- # waitforlisten 88172 
/var/tmp/bperf.sock 00:26:18.948 14:08:58 -- common/autotest_common.sh@817 -- # '[' -z 88172 ']' 00:26:18.948 14:08:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.948 14:08:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:18.948 14:08:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.948 14:08:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:18.948 14:08:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.948 14:08:58 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:18.948 [2024-04-26 14:08:58.562258] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:18.948 [2024-04-26 14:08:58.562381] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88172 ] 00:26:19.207 [2024-04-26 14:08:58.724485] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.467 [2024-04-26 14:08:58.978914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.726 14:08:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:19.726 14:08:59 -- common/autotest_common.sh@850 -- # return 0 00:26:19.726 14:08:59 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.726 14:08:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.985 14:08:59 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:19.985 14:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:19.985 14:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:19.985 14:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:19.985 14:08:59 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.985 14:08:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:20.245 nvme0n1 00:26:20.245 14:08:59 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:20.245 14:08:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.245 14:08:59 -- common/autotest_common.sh@10 -- # set +x 00:26:20.245 14:08:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.245 14:08:59 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:20.245 14:08:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:20.504 Running I/O for 2 seconds... 
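[editor's sketch] At this point the error-path setup from the preceding lines is complete: the target (pid 88128) has the crc32c opcode assigned to the accel error module, bdevperf (pid 88172) has attached nvme0 over TCP with --ddgst, and crc32c corruption has been injected via accel_error_inject_error, so each read that follows fails the data digest check on the initiator and completes with a transient transport error (the long stream of entries below). A rough sketch of that injection sequence, with RPC names and arguments taken from the log (target on the default /var/tmp/spdk.sock, bdevperf on /var/tmp/bperf.sock; the rest of the target bring-up, i.e. null bdev, TCP transport and listener, is elided):

  SPDK=/home/vagrant/spdk_repo/spdk

  # target side: route crc32c to the error-injection accel module before framework init
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  $SPDK/scripts/rpc.py framework_start_init

  # initiator side (bdevperf): count NVMe errors, retry forever, attach with data digest on
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable      # attach cleanly first
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # arm crc32c corruption on the target (flags exactly as issued in the log),
  # then drive the randread workload and watch the reads fail the digest check
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests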
00:26:20.504 [2024-04-26 14:09:00.008216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.008320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.008354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.022873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.022948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.022967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.038689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.038769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.038792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.052127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.052206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.052226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.065353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.065432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.065459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.079093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.079166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.079186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.092777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.092835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.092854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.106018] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.106077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.106098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.119883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.119944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.119963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.133293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.133351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.133370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.146337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.504 [2024-04-26 14:09:00.146394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.504 [2024-04-26 14:09:00.146412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.504 [2024-04-26 14:09:00.159946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.505 [2024-04-26 14:09:00.160008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.505 [2024-04-26 14:09:00.160027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.505 [2024-04-26 14:09:00.173678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.505 [2024-04-26 14:09:00.173741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.505 [2024-04-26 14:09:00.173759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.187286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.187355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 
[2024-04-26 14:09:00.187380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.200834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.200900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 [2024-04-26 14:09:00.200921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.215254] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.215308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 [2024-04-26 14:09:00.215327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.228935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.228997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 [2024-04-26 14:09:00.229016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.242468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.242531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 [2024-04-26 14:09:00.242550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.255915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.255983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 [2024-04-26 14:09:00.256001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.270148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.270229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.764 [2024-04-26 14:09:00.270249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.764 [2024-04-26 14:09:00.283408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.764 [2024-04-26 14:09:00.283469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:96 nsid:1 lba:11908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.283487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.297640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.297706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.297733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.311549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.311622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.311642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.323533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.323587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.323605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.337653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.337709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.337729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.350328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.350381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.350399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.363581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.363636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.363654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.377699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 
[2024-04-26 14:09:00.377756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:5160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.377775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.391559] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.391624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.391644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.404780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.404846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.404865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.419103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.419179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.419219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.765 [2024-04-26 14:09:00.433232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:20.765 [2024-04-26 14:09:00.433298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.765 [2024-04-26 14:09:00.433316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.446937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.447002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.447021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.460908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.460971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.460991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.473144] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.473240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.473267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.486091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.486163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:7625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.486184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.499338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.499397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.499415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.511797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.511852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.511870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.525235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.525295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.525320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.538910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.538962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.538981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.552379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.552442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.552465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.566571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.566626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.566644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.580595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.580653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.580673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.593905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.593980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.594000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.606989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.607044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.607062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.620368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.620426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.620444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.634295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.634352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.024 [2024-04-26 14:09:00.634371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.024 [2024-04-26 14:09:00.647032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.024 [2024-04-26 14:09:00.647091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.025 [2024-04-26 14:09:00.647108] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.025 [2024-04-26 14:09:00.660619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.025 [2024-04-26 14:09:00.660676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.025 [2024-04-26 14:09:00.660695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.025 [2024-04-26 14:09:00.674535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.025 [2024-04-26 14:09:00.674603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.025 [2024-04-26 14:09:00.674632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.025 [2024-04-26 14:09:00.688684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.025 [2024-04-26 14:09:00.688744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.025 [2024-04-26 14:09:00.688762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.701984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.702039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.702057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.714884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.714941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.714960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.729738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.729798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.729817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.741919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9870 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.741991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.754760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.754825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.754850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.769586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.769647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.769665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.783943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.284 [2024-04-26 14:09:00.783999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.284 [2024-04-26 14:09:00.784017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.284 [2024-04-26 14:09:00.795577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.795633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.795651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.808339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.808391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.808409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.822205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.822261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.822279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.838202] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.838256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.838275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.851597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.851677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.851702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.865139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.865216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.865235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.878895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.878950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.878968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.892120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.892189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.892209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.905190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.905239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.905257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.918965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.919026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.919045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.932222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.932275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.932294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.285 [2024-04-26 14:09:00.944887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.285 [2024-04-26 14:09:00.944939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.285 [2024-04-26 14:09:00.944956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:00.959113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:00.959182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:8866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:00.959200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:00.972507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:00.972569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:00.972595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:00.986338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:00.986392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:00.986411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:00.997200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:00.997268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:00.997287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.010411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.010463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.010481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 
14:09:01.025523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.025599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.025618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.038494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.038557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.038576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.052288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.052351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.052370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.065688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.065743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.065761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.078874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.078932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.078950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.092196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.092253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.092272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.104917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.104970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.104988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.121104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.121175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.121194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.134745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.134802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.134821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.148813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.148869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.148888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.161905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.161958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.161976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.175792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.175855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.175879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.189139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.189203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 14:09:01.189222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.202397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.544 [2024-04-26 14:09:01.202449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.544 [2024-04-26 
14:09:01.202466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.544 [2024-04-26 14:09:01.216029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.545 [2024-04-26 14:09:01.216086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.545 [2024-04-26 14:09:01.216104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.229093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.229145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.229177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.242409] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.242475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.242500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.255967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.256021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.256039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.269764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.269828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.269863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.284242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.284306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.284332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.297668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.297723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:23269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.297742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.310789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.310844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.310862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.324233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.324281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.324299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.336975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.337029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.337047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.350754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.350808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.350825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.362970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.363024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.363042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.378641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.378696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.378714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.392623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 
14:09:01.392677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.392695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.406051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.406110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.406130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.419364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.419419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.419437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.432261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.432309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.432327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.445965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.446017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.804 [2024-04-26 14:09:01.446034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.804 [2024-04-26 14:09:01.458934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.804 [2024-04-26 14:09:01.458988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.805 [2024-04-26 14:09:01.459007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.805 [2024-04-26 14:09:01.472438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:21.805 [2024-04-26 14:09:01.472500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.805 [2024-04-26 14:09:01.472525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.485991] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.486046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.486064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.498991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.499044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.499062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.512542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.512596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.512613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.525512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.525565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.525582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.538386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.538458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.538478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.551573] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.551624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.551641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.564569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.564634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.564659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.578358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.578421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.578447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.591997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.592054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.592073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.605619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.605673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.605692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.618946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.618999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.619016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.632131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.632196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.632214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.646237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.646301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.646319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.658900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.658955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.658974] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.672890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.672947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.672966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.686800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.686854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.686872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.699408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.699458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.699476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.712646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.712700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.712718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.065 [2024-04-26 14:09:01.725490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.065 [2024-04-26 14:09:01.725542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.065 [2024-04-26 14:09:01.725561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.324 [2024-04-26 14:09:01.739058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.324 [2024-04-26 14:09:01.739112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.324 [2024-04-26 14:09:01.739132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.324 [2024-04-26 14:09:01.752563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.752617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:14524 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.752635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.766474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.766535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.766561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.779936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.780000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.780026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.793730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.793788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.793806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.807610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.807664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.807681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.820407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.820457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.820475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.833772] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.833823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.833851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.847486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.847540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.847559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.861170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.861234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.861252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.874886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.874955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.874974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.887809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.887861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.887878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.901058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.901112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.901130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.914319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.914372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.914389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.929467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.929518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.929535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.943343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.943397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.943416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.956455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.956530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.956554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.969989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.970041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.970059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 [2024-04-26 14:09:01.981511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:22.325 [2024-04-26 14:09:01.981563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.325 [2024-04-26 14:09:01.981581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:22.325 00:26:22.325 Latency(us) 00:26:22.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.325 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:22.325 nvme0n1 : 2.00 18741.15 73.21 0.00 0.00 6821.77 3092.56 19476.56 00:26:22.325 =================================================================================================================== 00:26:22.325 Total : 18741.15 73.21 0.00 0.00 6821.77 3092.56 19476.56 00:26:22.325 0 00:26:22.585 14:09:02 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:22.585 14:09:02 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:22.585 14:09:02 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:22.585 | .driver_specific 00:26:22.585 | .nvme_error 00:26:22.585 | .status_code 00:26:22.585 | .command_transient_transport_error' 00:26:22.585 14:09:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:22.585 14:09:02 -- host/digest.sh@71 -- # (( 147 > 0 )) 00:26:22.585 14:09:02 -- host/digest.sh@73 -- # killprocess 88172 00:26:22.585 14:09:02 -- common/autotest_common.sh@936 -- # '[' -z 88172 ']' 00:26:22.585 14:09:02 -- common/autotest_common.sh@940 -- # kill -0 88172 00:26:22.585 14:09:02 -- common/autotest_common.sh@941 -- # uname 00:26:22.585 14:09:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:22.585 14:09:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88172 00:26:22.845 14:09:02 -- common/autotest_common.sh@942 -- # 
process_name=reactor_1 00:26:22.845 killing process with pid 88172 00:26:22.845 Received shutdown signal, test time was about 2.000000 seconds 00:26:22.845 00:26:22.845 Latency(us) 00:26:22.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:22.845 =================================================================================================================== 00:26:22.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:22.845 14:09:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:22.845 14:09:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88172' 00:26:22.845 14:09:02 -- common/autotest_common.sh@955 -- # kill 88172 00:26:22.845 14:09:02 -- common/autotest_common.sh@960 -- # wait 88172 00:26:23.783 14:09:03 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:23.783 14:09:03 -- host/digest.sh@54 -- # local rw bs qd 00:26:23.783 14:09:03 -- host/digest.sh@56 -- # rw=randread 00:26:23.783 14:09:03 -- host/digest.sh@56 -- # bs=131072 00:26:23.783 14:09:03 -- host/digest.sh@56 -- # qd=16 00:26:23.783 14:09:03 -- host/digest.sh@58 -- # bperfpid=88269 00:26:23.783 14:09:03 -- host/digest.sh@60 -- # waitforlisten 88269 /var/tmp/bperf.sock 00:26:23.783 14:09:03 -- common/autotest_common.sh@817 -- # '[' -z 88269 ']' 00:26:23.783 14:09:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:23.783 14:09:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:23.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:23.783 14:09:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:23.783 14:09:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:23.783 14:09:03 -- common/autotest_common.sh@10 -- # set +x 00:26:23.783 14:09:03 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:23.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:23.783 Zero copy mechanism will not be used. 00:26:23.783 [2024-04-26 14:09:03.387520] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:26:23.783 [2024-04-26 14:09:03.387642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88269 ] 00:26:24.085 [2024-04-26 14:09:03.559777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.358 [2024-04-26 14:09:03.790618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.616 14:09:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:24.616 14:09:04 -- common/autotest_common.sh@850 -- # return 0 00:26:24.616 14:09:04 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.616 14:09:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:24.873 14:09:04 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:24.873 14:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:24.873 14:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:24.873 14:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.873 14:09:04 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.873 14:09:04 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.131 nvme0n1 00:26:25.131 14:09:04 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:25.131 14:09:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.131 14:09:04 -- common/autotest_common.sh@10 -- # set +x 00:26:25.131 14:09:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.131 14:09:04 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:25.131 14:09:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.131 Zero copy mechanism will not be used. 00:26:25.131 Running I/O for 2 seconds... 
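For reference, the host/digest.sh trace above condenses to the following plain shell sketch. It is reconstructed only from the commands printed in this log (binary paths, socket paths, address/port, subsystem NQN, and all RPC flags are the ones shown in the trace); the bperf_rpc helper name comes from the trace itself, while tgt_rpc is a hypothetical stand-in for the script's rpc_cmd helper, assumed here to address the nvmf target app over its default RPC socket.

# Start bdevperf on its own RPC socket, same flags as the second run above
# (digest.sh also waits for /var/tmp/bperf.sock to come up before issuing RPCs).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
tgt_rpc()   { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }  # assumption: stands in for rpc_cmd

# Keep per-error statistics and retry indefinitely, so digest failures show up
# as transient transport errors in the iostat counters instead of failing I/O.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Reset any crc32c error injection, attach the target with data digest enabled,
# then corrupt every 32nd crc32c operation so digest mismatches are produced.
tgt_rpc accel_error_inject_error -o crc32c -t disable
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the workload, then read back how many commands completed with a
# transient transport error; digest.sh asserts this count is non-zero.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
errs=$(bperf_rpc bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')
(( errs > 0 ))

The "(( 147 > 0 ))" check earlier in this log is exactly that final assertion for the first randread run; the "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" pairs that follow are the per-command evidence behind the count.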
00:26:25.131 [2024-04-26 14:09:04.757403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.757477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.757498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.762537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.762599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.762617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.767474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.767528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.767546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.772473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.772530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.772547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.777633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.777686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.777704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.782484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.782540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.782558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.787337] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.787390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.787407] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.792148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.792214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.792232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.796924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.796978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.796995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.131 [2024-04-26 14:09:04.801678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.131 [2024-04-26 14:09:04.801732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.131 [2024-04-26 14:09:04.801749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.806526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.806580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.806599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.811278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.811331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.811348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.816012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.816068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.816086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.821048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.821099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:25.392 [2024-04-26 14:09:04.821116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.825776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.825826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.825852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.830587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.830641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.830660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.835437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.835489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.835506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.840216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.840262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.840280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.845097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.845165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.845184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.849887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.849937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.849955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.854730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.854783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.854801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.859475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.859529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.859546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.864304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.864358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.864376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.869305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.869356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.869374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.874132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.874196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.874214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.878863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.878916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.878933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.883731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.883786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.883803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.888581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 
00:26:25.392 [2024-04-26 14:09:04.888635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.888653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.893422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.893474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.893491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.898262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.898314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.898331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.903006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.903059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.903098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.907742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.907796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.907813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.912562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.912618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.912635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.917329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.392 [2024-04-26 14:09:04.917381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.392 [2024-04-26 14:09:04.917399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.392 [2024-04-26 14:09:04.922185] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.922234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.922252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.926989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.927043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.927061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.931800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.931853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.931871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.936468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.936523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.936541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.941327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.941379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.941397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.945986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.946039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.946056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.950976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.951032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.951049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.955930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.955982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.955999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.960761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.960816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.960834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.965663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.965716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.965733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.970471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.970525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.970542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.975212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.975264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.975281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.980182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.980231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.980248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.984984] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.985040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 
14:09:04.985058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.989572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.989627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.989644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.994598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.994652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:04.999400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:04.999451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:04.999469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.003997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.004050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.004067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.008839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.008895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.008912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.013584] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.013639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.013657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.018622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.018679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.018696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.023663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.023717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.023734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.028467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.028521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.028538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.033246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.033297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.033314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.038269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.038322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.038340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.043208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.043258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.043276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.048080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.393 [2024-04-26 14:09:05.048133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.393 [2024-04-26 14:09:05.048163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.393 [2024-04-26 14:09:05.053892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.394 
[2024-04-26 14:09:05.053957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.394 [2024-04-26 14:09:05.053984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.394 [2024-04-26 14:09:05.059703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.394 [2024-04-26 14:09:05.059772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.394 [2024-04-26 14:09:05.059797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.065762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.065843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.065869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.071903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.071971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.071997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.077721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.077791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.077818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.083988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.084054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.084080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.089845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.089913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.089940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.096016] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.096080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.096107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.101898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.101961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.101988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.107754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.107822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.107849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.113822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.113904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.113932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.119845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.119914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.119943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.125732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.125817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.125852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.131864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.131932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.131958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.137655] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.137722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.137750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.143646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.143712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.143738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.149395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.149460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.149486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.155414] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.155480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.155508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.654 [2024-04-26 14:09:05.161312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.654 [2024-04-26 14:09:05.161368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.654 [2024-04-26 14:09:05.161391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.167010] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.167077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.167105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.173072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.173136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.173178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.178799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.178870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.178896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.184842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.184912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.184939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.190642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.190704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.190723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.196214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.196276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.196301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.201903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.201970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.201999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.207718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.207783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.207808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.213374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.213446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.213471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.219347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.219413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.219439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.225429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.225487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.225509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.231304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.231369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.231396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.237249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.237314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.237339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.243371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.243437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.243462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.249098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.249177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.249204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.254891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.254959] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.254983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.260916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.260978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.261002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.266577] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.266642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.266669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.272566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.272631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.272656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.278338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.278394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.278414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.284098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.284176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.284203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.290123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.290191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.290223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.296028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.296091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.296118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.301694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.301759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.301786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.307722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.307786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.307812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.313344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.313405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.655 [2024-04-26 14:09:05.313430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.655 [2024-04-26 14:09:05.319358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.655 [2024-04-26 14:09:05.319419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.656 [2024-04-26 14:09:05.319448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.656 [2024-04-26 14:09:05.324945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.656 [2024-04-26 14:09:05.325008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.656 [2024-04-26 14:09:05.325034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.330557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.330621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.330651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.916 
[2024-04-26 14:09:05.336355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.336416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.336442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.342107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.342182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.342208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.347802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.347860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.347884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.353508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.353570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.353595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.359225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.359287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.359313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.365048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.365113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.365136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.370929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.370992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.371014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.376870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.376935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.376961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.382632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.382691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.382711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.388676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.388739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.388765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.394367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.394429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.394455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.400090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.400145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.400183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.405683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.405762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.405788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.411516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.411582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 
[2024-04-26 14:09:05.411609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.417391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.417453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.417476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.422950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.423018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.423046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.428914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.428975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.429000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.434735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.434800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.434829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.440510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.440573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.440599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.446101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.446179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.446207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.451961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.452021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.452044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.916 [2024-04-26 14:09:05.457957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.916 [2024-04-26 14:09:05.458016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.916 [2024-04-26 14:09:05.458039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.463734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.463795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.463819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.469445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.469502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.469524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.475245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.475305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.475331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.480905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.480971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.480996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.486799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.486869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.492890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 
[2024-04-26 14:09:05.492954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.492980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.498641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.498700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.498722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.504422] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.504485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.504511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.510367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.510424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.510450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.516243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.516305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.516332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.522468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.522530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.522556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.528454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.528515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.528541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.534458] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.534522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.534548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.540624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.540687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.540711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.546543] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.546611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.546634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.552705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.552765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.552788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.558503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.558570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.558596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.564531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.564597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.564623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.570722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.570787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.570813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.576690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.576755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.576780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.582350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.582412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.582438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.917 [2024-04-26 14:09:05.588071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:25.917 [2024-04-26 14:09:05.588128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:25.917 [2024-04-26 14:09:05.588166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.593745] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.593809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.593849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.599610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.599674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.599699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.605297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.605360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.605387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.611183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.611244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.611269] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.616919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.616979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.617002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.622877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.622936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.622979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.628724] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.628787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.177 [2024-04-26 14:09:05.628812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.177 [2024-04-26 14:09:05.634677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.177 [2024-04-26 14:09:05.634920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.635041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.640553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.640612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.640635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.646404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.646472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.646498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.652263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.652324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.652349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.658196] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.658260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.658287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.664125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.664195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.664218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.669794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.669874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.669902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.675920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.675981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.676004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.681956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.682019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.682045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.687726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.687790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.687816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.693583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.693813] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.694072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.699525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.699588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.699614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.705298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.705355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.705378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.711303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.711361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.711384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.717373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.717437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.717465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.723516] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.723583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.723608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.729706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.729767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.729790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.735570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.735635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.735662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.741602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.741663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.741688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.747432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.747486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.747507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.753072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.753132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.753170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.759023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.759097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.178 [2024-04-26 14:09:05.765070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.178 [2024-04-26 14:09:05.765337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.178 [2024-04-26 14:09:05.765489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.770842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.771052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.771252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:26.179 [2024-04-26 14:09:05.776734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.776878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.776914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.782672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.782730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.782757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.788473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.788529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.788561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.794339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.794395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.794422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.800302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.800525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.800659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.806026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.806264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.806394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.811860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.812085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.812246] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.817694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.817922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.818065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.823701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.823762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.823794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.829397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.829454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.829480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.835473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.835712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.835873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.842177] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.842394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.842545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.179 [2024-04-26 14:09:05.848067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.179 [2024-04-26 14:09:05.848126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.179 [2024-04-26 14:09:05.848172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.854191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.854416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:26.439 [2024-04-26 14:09:05.854557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.860194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.860410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.860569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.865877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.866107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.866261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.871952] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.872009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.872034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.878000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.878241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.878443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.883913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.884148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.884367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.889100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.889301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.889327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.894300] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.894507] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.894639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.899810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.900036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.900187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.905286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.905490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.905675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.910763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.910977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.911106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.916124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.916360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.916548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.921706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.921908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.921934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.926527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.439 [2024-04-26 14:09:05.926595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.926616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.930899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 
00:26:26.439 [2024-04-26 14:09:05.930965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.439 [2024-04-26 14:09:05.930991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.439 [2024-04-26 14:09:05.936115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.936195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.936222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.941221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.941283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.941308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.945450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.945508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.945531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.951576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.951641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.951667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.957574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.957798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.958002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.962518] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.962748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.962921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.968379] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.968446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.968472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.974430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.974489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.974509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.979904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.979970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.979996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.983628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.983685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.983705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.989875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.989939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.989965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:05.996065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:05.996124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:05.996145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.001791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.001870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.001897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.005638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.005702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.005728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.011236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.011296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.011321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.017260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.017326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.017350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.023068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.023134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.023182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.028734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.028792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.028811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.034694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.034759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.034785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.040273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.040335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.040361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.046189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.046253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.046280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.052032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.052089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.052109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.056982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.057037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.057055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.062329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.062380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.062398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.067426] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.067483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.067501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.072467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.072520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.072538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.440 [2024-04-26 14:09:06.077278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.440 [2024-04-26 14:09:06.077326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:26.440 [2024-04-26 14:09:06.077343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.441 [2024-04-26 14:09:06.082313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.441 [2024-04-26 14:09:06.082362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.441 [2024-04-26 14:09:06.082379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.441 [2024-04-26 14:09:06.087283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.441 [2024-04-26 14:09:06.087334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.441 [2024-04-26 14:09:06.087351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.441 [2024-04-26 14:09:06.092210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.441 [2024-04-26 14:09:06.092262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.441 [2024-04-26 14:09:06.092279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.441 [2024-04-26 14:09:06.097208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.441 [2024-04-26 14:09:06.097259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.441 [2024-04-26 14:09:06.097276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.441 [2024-04-26 14:09:06.102292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.441 [2024-04-26 14:09:06.102339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.441 [2024-04-26 14:09:06.102357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.441 [2024-04-26 14:09:06.107273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.441 [2024-04-26 14:09:06.107323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.441 [2024-04-26 14:09:06.107341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.112076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.112129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.112147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.117115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.117177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.117195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.122107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.122173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.122192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.126938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.126992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.127010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.131933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.131999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.132017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.136981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.137037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.137054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.142060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.142111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.142129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.147091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.147147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.147177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.152042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.152095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.152113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.156965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.157018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.157036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.161945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.161994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.162011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.166894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.167072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.167095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.171994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.172049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.172066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.176784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.176834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.176852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.181640] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.181693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.181709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.186666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.186720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.186737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.191679] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.191879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.192050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.197023] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.197200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.197223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.202239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.202290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.701 [2024-04-26 14:09:06.202307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.701 [2024-04-26 14:09:06.207278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.701 [2024-04-26 14:09:06.207325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.207342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.212258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.212308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.212326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.217187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.217236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.217254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.222209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.222258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.222276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.227222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.227270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.227288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.232197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.232246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.232263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.236945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.236998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.237016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.241960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.242010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.242027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.246868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.247044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.247066] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.251949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.252003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.252021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.256864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.256915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.256931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.261891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.261946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.261964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.266963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.267017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.267034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.271872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.271925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.271942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.276737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.276789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.276806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.281570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.281621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.281639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.286519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.286573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.286591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.291569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.291623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.291641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.296553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.296607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.296625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.301464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.301518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.301535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.306520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.306573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.306590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.311430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.311485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.311502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.316449] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.316501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.316536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.321401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.321454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.321471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.326436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.326485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.326502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.331332] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.331380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.331398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.336121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.336184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.702 [2024-04-26 14:09:06.336202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.702 [2024-04-26 14:09:06.341035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.702 [2024-04-26 14:09:06.341086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.341104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.703 [2024-04-26 14:09:06.346096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.703 [2024-04-26 14:09:06.346148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.346177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.703 [2024-04-26 14:09:06.351121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:26:26.703 [2024-04-26 14:09:06.351184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.351202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.703 [2024-04-26 14:09:06.356117] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.703 [2024-04-26 14:09:06.356182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.356200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.703 [2024-04-26 14:09:06.360973] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.703 [2024-04-26 14:09:06.361027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.361044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.703 [2024-04-26 14:09:06.365938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.703 [2024-04-26 14:09:06.365986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.366004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.703 [2024-04-26 14:09:06.370794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.703 [2024-04-26 14:09:06.370844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.703 [2024-04-26 14:09:06.370862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.962 [2024-04-26 14:09:06.375860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.962 [2024-04-26 14:09:06.376054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.962 [2024-04-26 14:09:06.376208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.962 [2024-04-26 14:09:06.381093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.962 [2024-04-26 14:09:06.381145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.962 [2024-04-26 14:09:06.381175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.386180] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.386228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.386246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.391212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.391261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.391278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.396204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.396254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.396271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.401027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.401080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.401097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.405934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.405985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.406002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.410910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.410963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.410980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.415811] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.415981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.416003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.420764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.420817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.420835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.425770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.425820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.425849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.430515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.430567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.430585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.435502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.435699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.435877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.440938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.441125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.441263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.446263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.446311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.446329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.451242] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.451292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.451309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.456132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.456196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.456214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.461030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.461080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.461098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.465994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.466047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.466064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.470870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.470925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.470943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.475881] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.476085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.476239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.481179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.481232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.481250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.486228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.486279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.486296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.491075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.491130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.491147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.496104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.496167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.496186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.500898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.500947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.500964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.505993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.506175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.506197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.511048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.511103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.963 [2024-04-26 14:09:06.511121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.963 [2024-04-26 14:09:06.516130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.963 [2024-04-26 14:09:06.516192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.516210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.521124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.521187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.521205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.526310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.526362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.526379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.531335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.531400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.531417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.536323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.536370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.536388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.541408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.541455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.541474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.546301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.546352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.546369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.551259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.551307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.551325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.556295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.556342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.556360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.561341] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.561392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.561409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.566390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.566444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.566462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.571374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.571426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.571444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.576326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.576372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.576390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.581230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.581277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.581294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.586318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.586367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.586384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.591254] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.591320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.591338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.596317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.596364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.596382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.601262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.601313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.601331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.606121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.606181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.606201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.610884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.610938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.610956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.615903] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.616103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.616128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.620977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.621032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.621050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.625942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.625993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.626011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:26.964 [2024-04-26 14:09:06.630961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:26.964 [2024-04-26 14:09:06.631014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.964 [2024-04-26 14:09:06.631031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.223 [2024-04-26 14:09:06.635941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.635995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.636012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.640731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.640784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.640801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.645635] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.645688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.645705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.650614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.650664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.650682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.655515] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.655567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.655584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.660430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.660483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.660501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.665595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.665651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.665671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.670713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.670763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.670781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.675386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.675438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.675456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.680417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.680470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.680488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.685344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.685391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.685408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.690351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.690400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.690417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.695339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.695403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.695422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.700318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.700368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.700385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.705135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.705201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.705219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.710174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.710220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.710238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.714937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.714990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.715007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.719969] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.720022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.720039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.724956] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.725010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.725027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.729964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.730014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.730031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.734987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.735041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.735058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.740024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.740078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.740096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.224 [2024-04-26 14:09:06.744930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:26:27.224 [2024-04-26 14:09:06.744985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.224 [2024-04-26 14:09:06.745004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.224 00:26:27.224 Latency(us) 00:26:27.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.224 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:27.224 nvme0n1 : 2.00 5783.80 722.97 0.00 0.00 2762.64 947.51 7001.03 00:26:27.224 =================================================================================================================== 00:26:27.224 Total : 5783.80 722.97 0.00 0.00 2762.64 947.51 7001.03 00:26:27.224 0 00:26:27.224 14:09:06 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:27.224 14:09:06 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:27.224 14:09:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:27.224 14:09:06 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:27.224 | .driver_specific 00:26:27.224 | .nvme_error 00:26:27.224 | .status_code 00:26:27.224 | .command_transient_transport_error' 00:26:27.483 14:09:06 -- host/digest.sh@71 -- # (( 373 > 0 )) 00:26:27.483 14:09:06 -- host/digest.sh@73 -- # killprocess 88269 00:26:27.483 14:09:06 -- 
common/autotest_common.sh@936 -- # '[' -z 88269 ']' 00:26:27.483 14:09:06 -- common/autotest_common.sh@940 -- # kill -0 88269 00:26:27.483 14:09:06 -- common/autotest_common.sh@941 -- # uname 00:26:27.483 14:09:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:27.483 14:09:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88269 00:26:27.483 killing process with pid 88269 00:26:27.483 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.483 00:26:27.483 Latency(us) 00:26:27.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.483 =================================================================================================================== 00:26:27.483 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.483 14:09:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:27.483 14:09:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:27.483 14:09:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88269' 00:26:27.483 14:09:07 -- common/autotest_common.sh@955 -- # kill 88269 00:26:27.483 14:09:07 -- common/autotest_common.sh@960 -- # wait 88269 00:26:28.859 14:09:08 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:28.859 14:09:08 -- host/digest.sh@54 -- # local rw bs qd 00:26:28.859 14:09:08 -- host/digest.sh@56 -- # rw=randwrite 00:26:28.859 14:09:08 -- host/digest.sh@56 -- # bs=4096 00:26:28.859 14:09:08 -- host/digest.sh@56 -- # qd=128 00:26:28.859 14:09:08 -- host/digest.sh@58 -- # bperfpid=88371 00:26:28.859 14:09:08 -- host/digest.sh@60 -- # waitforlisten 88371 /var/tmp/bperf.sock 00:26:28.859 14:09:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:28.859 14:09:08 -- common/autotest_common.sh@817 -- # '[' -z 88371 ']' 00:26:28.859 14:09:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.859 14:09:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:28.859 14:09:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.859 14:09:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:28.859 14:09:08 -- common/autotest_common.sh@10 -- # set +x 00:26:28.859 [2024-04-26 14:09:08.355708] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
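The randread error-injection pass above is scored once its 2-second run completes and before bdevperf (pid 88269) is killed: the trace shows host/digest.sh issuing bdev_get_iostat over the bperf RPC socket and filtering the per-bdev NVMe error counters with jq, so the "(( 373 > 0 ))" check means 373 commands completed with a transient transport error during that run (5783.80 IOPS at a 128 KiB IO size, which is the 722.97 MiB/s reported in the table). The following is a condensed sketch of that query, reconstructed from the trace rather than copied from digest.sh; the rpc.py path, socket and bdev name are taken from the log:

get_transient_errcount() {
    local bdev=$1
    # Read the bdev iostat over the bdevperf RPC socket and pull out the count of
    # commands that completed with TRANSIENT TRANSPORT ERROR (00/22).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # the pass only succeeds if at least one injected digest error was observed

These per-status-code counters are only populated because bdev_nvme_set_options is called with --nvme-error-stat when each bperf instance is set up, as the randwrite setup trace below shows.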
00:26:28.859 [2024-04-26 14:09:08.356039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88371 ] 00:26:28.859 [2024-04-26 14:09:08.526067] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.118 [2024-04-26 14:09:08.757132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.685 14:09:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:29.685 14:09:09 -- common/autotest_common.sh@850 -- # return 0 00:26:29.685 14:09:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.685 14:09:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:29.943 14:09:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:29.943 14:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:29.943 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:29.943 14:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:29.943 14:09:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.943 14:09:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:30.202 nvme0n1 00:26:30.202 14:09:09 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:30.202 14:09:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:30.202 14:09:09 -- common/autotest_common.sh@10 -- # set +x 00:26:30.202 14:09:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:30.202 14:09:09 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:30.202 14:09:09 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:30.202 Running I/O for 2 seconds... 
00:26:30.202 [2024-04-26 14:09:09.767615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6458 00:26:30.202 [2024-04-26 14:09:09.768638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.768692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.780805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4de8 00:26:30.202 [2024-04-26 14:09:09.782434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.782485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.788551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:26:30.202 [2024-04-26 14:09:09.789271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.789318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.801493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:26:30.202 [2024-04-26 14:09:09.802894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.802947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.811728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:26:30.202 [2024-04-26 14:09:09.812786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.812837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.821789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:26:30.202 [2024-04-26 14:09:09.822585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:5747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.822634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.832997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:26:30.202 [2024-04-26 14:09:09.834093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.834146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.846151] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:26:30.202 [2024-04-26 14:09:09.847844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.847895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.856067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:26:30.202 [2024-04-26 14:09:09.856978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.857027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.202 [2024-04-26 14:09:09.866886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:26:30.202 [2024-04-26 14:09:09.867521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.202 [2024-04-26 14:09:09.867570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.877463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:26:30.533 [2024-04-26 14:09:09.878119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.878179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.888412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1430 00:26:30.533 [2024-04-26 14:09:09.889484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.889533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.901360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0ea0 00:26:30.533 [2024-04-26 14:09:09.903083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.903133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.910576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:26:30.533 [2024-04-26 14:09:09.911437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.911485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.920993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:26:30.533 [2024-04-26 14:09:09.921962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.922012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.931723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:26:30.533 [2024-04-26 14:09:09.932681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.932731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.942380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 00:26:30.533 [2024-04-26 14:09:09.943329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.943375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.952700] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:26:30.533 [2024-04-26 14:09:09.953282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.953329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.965791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:26:30.533 [2024-04-26 14:09:09.967522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.967569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.973509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:26:30.533 [2024-04-26 14:09:09.974350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.974398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.986499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2d80 00:26:30.533 [2024-04-26 14:09:09.987935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:30.533 [2024-04-26 14:09:09.987981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:09.996501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd208 00:26:30.533 [2024-04-26 14:09:09.997696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:09.997744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:10.007262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:26:30.533 [2024-04-26 14:09:10.008537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:10.008585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:10.020290] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:26:30.533 [2024-04-26 14:09:10.022194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.533 [2024-04-26 14:09:10.022243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.533 [2024-04-26 14:09:10.028131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:26:30.534 [2024-04-26 14:09:10.029064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.029125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.039710] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1710 00:26:30.534 [2024-04-26 14:09:10.040616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.040666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.050447] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:26:30.534 [2024-04-26 14:09:10.051345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.051392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.060623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:26:30.534 [2024-04-26 14:09:10.061409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:11217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.061458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.071966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:26:30.534 [2024-04-26 14:09:10.073550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.073597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.083121] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5be8 00:26:30.534 [2024-04-26 14:09:10.084200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.084248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.094261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:26:30.534 [2024-04-26 14:09:10.095636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.095684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.104699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dfdc0 00:26:30.534 [2024-04-26 14:09:10.105555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.105604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.115296] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9e10 00:26:30.534 [2024-04-26 14:09:10.116122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.116184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.125908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:26:30.534 [2024-04-26 14:09:10.126738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.126785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.136318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:26:30.534 [2024-04-26 14:09:10.137118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.137179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.146805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:26:30.534 [2024-04-26 14:09:10.147883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.147930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.159746] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eee38 00:26:30.534 [2024-04-26 14:09:10.161464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.161509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.167462] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:26:30.534 [2024-04-26 14:09:10.168274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.168322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.180389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1430 00:26:30.534 [2024-04-26 14:09:10.181813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.181871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.190434] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0a68 00:26:30.534 [2024-04-26 14:09:10.191604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.191652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:30.534 [2024-04-26 14:09:10.200945] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:26:30.534 [2024-04-26 14:09:10.202111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.534 [2024-04-26 14:09:10.202170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.793 [2024-04-26 14:09:10.213885] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e27f0 
00:26:30.793 [2024-04-26 14:09:10.215658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-04-26 14:09:10.215704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.793 [2024-04-26 14:09:10.221590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:26:30.793 [2024-04-26 14:09:10.222482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-04-26 14:09:10.222530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.793 [2024-04-26 14:09:10.234499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:26:30.793 [2024-04-26 14:09:10.235993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.793 [2024-04-26 14:09:10.236038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.244561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1710 00:26:30.794 [2024-04-26 14:09:10.245784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.245842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.255073] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:26:30.794 [2024-04-26 14:09:10.256308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.256354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.265593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed920 00:26:30.794 [2024-04-26 14:09:10.266303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.266350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.278751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:26:30.794 [2024-04-26 14:09:10.280599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.280645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.286524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:26:30.794 [2024-04-26 14:09:10.287494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.287541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.297082] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:26:30.794 [2024-04-26 14:09:10.297924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.297973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.309071] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e01f8 00:26:30.794 [2024-04-26 14:09:10.310335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.310385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.319092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:26:30.794 [2024-04-26 14:09:10.320084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.320133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.329106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f57b0 00:26:30.794 [2024-04-26 14:09:10.329771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.329820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.339768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:26:30.794 [2024-04-26 14:09:10.340430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.340478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.352588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:26:30.794 [2024-04-26 14:09:10.354031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.354081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.362616] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:26:30.794 [2024-04-26 14:09:10.363807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.363855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.372653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:26:30.794 [2024-04-26 14:09:10.373504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.373552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.383808] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:26:30.794 [2024-04-26 14:09:10.384968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.385015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.396751] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:26:30.794 [2024-04-26 14:09:10.398542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.398589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.404479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e49b0 00:26:30.794 [2024-04-26 14:09:10.405367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.405413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.416958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:26:30.794 [2024-04-26 14:09:10.418172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.418220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.426863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:26:30.794 [2024-04-26 14:09:10.427934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.427981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b 
p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.439910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2510 00:26:30.794 [2024-04-26 14:09:10.441755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.441801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.449166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:26:30.794 [2024-04-26 14:09:10.450124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.450187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:30.794 [2024-04-26 14:09:10.459624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8d30 00:26:30.794 [2024-04-26 14:09:10.460695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:30.794 [2024-04-26 14:09:10.460743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.470793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:26:31.054 [2024-04-26 14:09:10.472173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.472220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.481308] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:26:31.054 [2024-04-26 14:09:10.482145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.482207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.491619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7970 00:26:31.054 [2024-04-26 14:09:10.492442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.492490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.501705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:26:31.054 [2024-04-26 14:09:10.502494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.502542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.514539] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:26:31.054 [2024-04-26 14:09:10.516087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.516133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.522280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:26:31.054 [2024-04-26 14:09:10.522937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.522984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.535197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:26:31.054 [2024-04-26 14:09:10.536473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.536521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.545240] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:26:31.054 [2024-04-26 14:09:10.546262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.546311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.555703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7100 00:26:31.054 [2024-04-26 14:09:10.556720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.556766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.568205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3060 00:26:31.054 [2024-04-26 14:09:10.569516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.569565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.579344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8a50 00:26:31.054 [2024-04-26 14:09:10.580959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.581004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.587083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:26:31.054 [2024-04-26 14:09:10.587820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.587867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.599999] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7da8 00:26:31.054 [2024-04-26 14:09:10.601367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.601413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.610023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1f80 00:26:31.054 [2024-04-26 14:09:10.611172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.611217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.620556] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:26:31.054 [2024-04-26 14:09:10.621635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.621682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.633491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:26:31.054 [2024-04-26 14:09:10.635202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.635247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.641205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:26:31.054 [2024-04-26 14:09:10.642010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.642058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.654126] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:31.054 [2024-04-26 14:09:10.655551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22155 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:31.054 [2024-04-26 14:09:10.655597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.664128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:26:31.054 [2024-04-26 14:09:10.665325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.665372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.674655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7538 00:26:31.054 [2024-04-26 14:09:10.675805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.675854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.687545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:26:31.054 [2024-04-26 14:09:10.689323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.689367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.695294] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:26:31.054 [2024-04-26 14:09:10.696201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.696245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.705846] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:31.054 [2024-04-26 14:09:10.706609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.706656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:31.054 [2024-04-26 14:09:10.718855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebb98 00:26:31.054 [2024-04-26 14:09:10.720374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.054 [2024-04-26 14:09:10.720419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:31.313 [2024-04-26 14:09:10.728044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:26:31.313 [2024-04-26 14:09:10.728706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:10859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.313 [2024-04-26 14:09:10.728753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:31.313 [2024-04-26 14:09:10.738759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5a90 00:26:31.313 [2024-04-26 14:09:10.739415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.313 [2024-04-26 14:09:10.739461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:31.313 [2024-04-26 14:09:10.749463] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:26:31.313 [2024-04-26 14:09:10.750119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.313 [2024-04-26 14:09:10.750181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:31.313 [2024-04-26 14:09:10.760368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:31.313 [2024-04-26 14:09:10.761438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.313 [2024-04-26 14:09:10.761485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:31.313 [2024-04-26 14:09:10.770770] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fdeb0 00:26:31.314 [2024-04-26 14:09:10.771316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.771357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.783889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4b08 00:26:31.314 [2024-04-26 14:09:10.785569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.785613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.791612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:26:31.314 [2024-04-26 14:09:10.792406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.792452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.804502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:26:31.314 [2024-04-26 14:09:10.805911] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.805959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.815032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:26:31.314 [2024-04-26 14:09:10.816326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.816374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.825654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:26:31.314 [2024-04-26 14:09:10.826932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.826981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.835796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f46d0 00:26:31.314 [2024-04-26 14:09:10.836951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.836999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.846285] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:26:31.314 [2024-04-26 14:09:10.847398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.847445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.856705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f7100 00:26:31.314 [2024-04-26 14:09:10.857295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.857343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.866329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:26:31.314 [2024-04-26 14:09:10.867014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.867061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.879228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 
00:26:31.314 [2024-04-26 14:09:10.880529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.880578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.889276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e0630 00:26:31.314 [2024-04-26 14:09:10.890418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.890468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.900089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:26:31.314 [2024-04-26 14:09:10.901161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.901217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.913230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:26:31.314 [2024-04-26 14:09:10.914897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.914949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.921092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb048 00:26:31.314 [2024-04-26 14:09:10.921945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.921996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.933788] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:26:31.314 [2024-04-26 14:09:10.934879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.934930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.943768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e2c28 00:26:31.314 [2024-04-26 14:09:10.944759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.944808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.954438] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:31.314 [2024-04-26 14:09:10.955409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.955456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.967427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb760 00:26:31.314 [2024-04-26 14:09:10.969008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.969052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:31.314 [2024-04-26 14:09:10.975150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:26:31.314 [2024-04-26 14:09:10.975867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.314 [2024-04-26 14:09:10.975913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:10.987419] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10 00:26:31.573 [2024-04-26 14:09:10.989021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:10.989066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:10.998425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:26:31.573 [2024-04-26 14:09:10.999651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:23247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:10.999699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.011478] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195feb58 00:26:31.573 [2024-04-26 14:09:11.013254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.013298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.019250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658 00:26:31.573 [2024-04-26 14:09:11.020125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.020185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 
14:09:11.032193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8 00:26:31.573 [2024-04-26 14:09:11.033714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.033759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.042773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:26:31.573 [2024-04-26 14:09:11.044150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.044209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.052992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:26:31.573 [2024-04-26 14:09:11.054253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.054300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.063498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:26:31.573 [2024-04-26 14:09:11.064733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.064782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.076464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:26:31.573 [2024-04-26 14:09:11.078314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.078363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.084193] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:26:31.573 [2024-04-26 14:09:11.085134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.085190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.097092] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ecc78 00:26:31.573 [2024-04-26 14:09:11.098675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.098723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.104832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:26:31.573 [2024-04-26 14:09:11.105513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.105562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.117734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df550 00:26:31.573 [2024-04-26 14:09:11.119058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.119107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.128305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df118 00:26:31.573 [2024-04-26 14:09:11.129479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.129526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.138473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de8a8 00:26:31.573 [2024-04-26 14:09:11.139557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.139607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.148983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:26:31.573 [2024-04-26 14:09:11.150010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.150060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.161963] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ff3c8 00:26:31.573 [2024-04-26 14:09:11.163623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.163670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.172527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6cc8 00:26:31.573 [2024-04-26 14:09:11.174045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.174094] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.180405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:26:31.573 [2024-04-26 14:09:11.181131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:7805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.181189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.193382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb048 00:26:31.573 [2024-04-26 14:09:11.194759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.194811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.203935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:26:31.573 [2024-04-26 14:09:11.205174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.205220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.214591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:26:31.573 [2024-04-26 14:09:11.215809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.215857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.224744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:26:31.573 [2024-04-26 14:09:11.225866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.225912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:31.573 [2024-04-26 14:09:11.235261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:26:31.573 [2024-04-26 14:09:11.236321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.573 [2024-04-26 14:09:11.236368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.248150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:26:31.832 [2024-04-26 14:09:11.249846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 
14:09:11.249891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.255896] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df550 00:26:31.832 [2024-04-26 14:09:11.256685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.256733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.268791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f92c0 00:26:31.832 [2024-04-26 14:09:11.270216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.270263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.278813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:26:31.832 [2024-04-26 14:09:11.279997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.280045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.289310] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:26:31.832 [2024-04-26 14:09:11.290454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.290505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.302244] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:26:31.832 [2024-04-26 14:09:11.303987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.304034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.309976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:26:31.832 [2024-04-26 14:09:11.310835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.310881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.322950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:26:31.832 [2024-04-26 14:09:11.324443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15057 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.324489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.332979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:26:31.832 [2024-04-26 14:09:11.334278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.334325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.343578] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:26:31.832 [2024-04-26 14:09:11.344789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.344835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.356592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:26:31.832 [2024-04-26 14:09:11.358435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.358476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.367230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e27f0 00:26:31.832 [2024-04-26 14:09:11.368933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:10794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.368978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.375129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea680 00:26:31.832 [2024-04-26 14:09:11.376063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.376111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.388108] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9b30 00:26:31.832 [2024-04-26 14:09:11.389668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.389713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.398719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7c50 00:26:31.832 [2024-04-26 14:09:11.400188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.400232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.409426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:26:31.832 [2024-04-26 14:09:11.410854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.410904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.420080] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:26:31.832 [2024-04-26 14:09:11.421083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.421132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.429783] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:26:31.832 [2024-04-26 14:09:11.430905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.430954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.442792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f35f0 00:26:31.832 [2024-04-26 14:09:11.444531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.444577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.450596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5220 00:26:31.832 [2024-04-26 14:09:11.451440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.451486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.463562] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f81e0 00:26:31.832 [2024-04-26 14:09:11.465020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.465067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.473657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195f20d8 00:26:31.832 [2024-04-26 14:09:11.474860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.474910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.484248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:26:31.832 [2024-04-26 14:09:11.485432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.832 [2024-04-26 14:09:11.485478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:31.832 [2024-04-26 14:09:11.497222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f9f68 00:26:31.833 [2024-04-26 14:09:11.499022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.833 [2024-04-26 14:09:11.499071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:31.833 [2024-04-26 14:09:11.504948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:26:32.091 [2024-04-26 14:09:11.505889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.505935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.515569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4f40 00:26:32.091 [2024-04-26 14:09:11.516367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.516414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.528677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:32.091 [2024-04-26 14:09:11.530206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.530255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.538766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:32.091 [2024-04-26 14:09:11.540045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.540094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.548818] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:26:32.091 [2024-04-26 14:09:11.549751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.549798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.559976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3d08 00:26:32.091 [2024-04-26 14:09:11.561229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.561276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.570458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195de038 00:26:32.091 [2024-04-26 14:09:11.571169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.571216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.581056] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:26:32.091 [2024-04-26 14:09:11.581757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.581807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.591723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:26:32.091 [2024-04-26 14:09:11.592416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.091 [2024-04-26 14:09:11.592462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:32.091 [2024-04-26 14:09:11.603652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:26:32.091 [2024-04-26 14:09:11.605077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.605124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.612935] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4298 00:26:32.092 [2024-04-26 14:09:11.613623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.613670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:32.092 
[2024-04-26 14:09:11.623841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:26:32.092 [2024-04-26 14:09:11.624938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.624986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.636798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ebfd0 00:26:32.092 [2024-04-26 14:09:11.638566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.638616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.647469] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e3498 00:26:32.092 [2024-04-26 14:09:11.649061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.649107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.657657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc560 00:26:32.092 [2024-04-26 14:09:11.659087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.659136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.666526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:26:32.092 [2024-04-26 14:09:11.667105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.667168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.679953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:26:32.092 [2024-04-26 14:09:11.681693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.681740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.687733] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6b70 00:26:32.092 [2024-04-26 14:09:11.688586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.688632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.700741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec408 00:26:32.092 [2024-04-26 14:09:11.702213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.702261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.710847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed0b0 00:26:32.092 [2024-04-26 14:09:11.712049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.712098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.721389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e8088 00:26:32.092 [2024-04-26 14:09:11.722581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.722630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.734393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4578 00:26:32.092 [2024-04-26 14:09:11.736201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.736240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.742112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eee38 00:26:32.092 [2024-04-26 14:09:11.743028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.743075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:32.092 [2024-04-26 14:09:11.755054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:26:32.092 [2024-04-26 14:09:11.756591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.092 [2024-04-26 14:09:11.756635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:32.092 00:26:32.092 Latency(us) 00:26:32.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.092 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:32.092 nvme0n1 : 2.01 23569.95 92.07 0.00 0.00 5425.14 2381.93 14528.46 00:26:32.092 
=================================================================================================================== 00:26:32.092 Total : 23569.95 92.07 0.00 0.00 5425.14 2381.93 14528.46 00:26:32.092 0 00:26:32.351 14:09:11 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:32.351 14:09:11 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:32.351 14:09:11 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:32.351 | .driver_specific 00:26:32.351 | .nvme_error 00:26:32.351 | .status_code 00:26:32.351 | .command_transient_transport_error' 00:26:32.351 14:09:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:32.351 14:09:11 -- host/digest.sh@71 -- # (( 185 > 0 )) 00:26:32.351 14:09:11 -- host/digest.sh@73 -- # killprocess 88371 00:26:32.351 14:09:11 -- common/autotest_common.sh@936 -- # '[' -z 88371 ']' 00:26:32.351 14:09:11 -- common/autotest_common.sh@940 -- # kill -0 88371 00:26:32.351 14:09:11 -- common/autotest_common.sh@941 -- # uname 00:26:32.351 14:09:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:32.351 14:09:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88371 00:26:32.609 killing process with pid 88371 00:26:32.609 Received shutdown signal, test time was about 2.000000 seconds 00:26:32.609 00:26:32.609 Latency(us) 00:26:32.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:32.609 =================================================================================================================== 00:26:32.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:32.609 14:09:12 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:32.609 14:09:12 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:32.609 14:09:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88371' 00:26:32.609 14:09:12 -- common/autotest_common.sh@955 -- # kill 88371 00:26:32.609 14:09:12 -- common/autotest_common.sh@960 -- # wait 88371 00:26:33.543 14:09:13 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:33.543 14:09:13 -- host/digest.sh@54 -- # local rw bs qd 00:26:33.543 14:09:13 -- host/digest.sh@56 -- # rw=randwrite 00:26:33.543 14:09:13 -- host/digest.sh@56 -- # bs=131072 00:26:33.543 14:09:13 -- host/digest.sh@56 -- # qd=16 00:26:33.543 14:09:13 -- host/digest.sh@58 -- # bperfpid=88463 00:26:33.543 14:09:13 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:33.543 14:09:13 -- host/digest.sh@60 -- # waitforlisten 88463 /var/tmp/bperf.sock 00:26:33.543 14:09:13 -- common/autotest_common.sh@817 -- # '[' -z 88463 ']' 00:26:33.543 14:09:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:33.543 14:09:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:33.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:33.543 14:09:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:33.543 14:09:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:33.543 14:09:13 -- common/autotest_common.sh@10 -- # set +x 00:26:33.543 [2024-04-26 14:09:13.125653] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:33.543 I/O size of 131072 is greater than zero copy threshold (65536). 
00:26:33.543 Zero copy mechanism will not be used. 00:26:33.543 [2024-04-26 14:09:13.125805] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88463 ] 00:26:33.801 [2024-04-26 14:09:13.297276] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.059 [2024-04-26 14:09:13.535371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.317 14:09:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:34.318 14:09:13 -- common/autotest_common.sh@850 -- # return 0 00:26:34.318 14:09:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.318 14:09:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.575 14:09:14 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:34.575 14:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.575 14:09:14 -- common/autotest_common.sh@10 -- # set +x 00:26:34.575 14:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.575 14:09:14 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.575 14:09:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:34.834 nvme0n1 00:26:34.834 14:09:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:34.834 14:09:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:34.834 14:09:14 -- common/autotest_common.sh@10 -- # set +x 00:26:34.834 14:09:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:34.834 14:09:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:34.834 14:09:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.093 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.093 Zero copy mechanism will not be used. 00:26:35.093 Running I/O for 2 seconds... 
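The trace above captures the whole digest-error flow for this pass: bdevperf is started with -w randwrite -o 131072 -t 2 -q 16 against /var/tmp/bperf.sock, NVMe error statistics and the bdev retry count are configured, a controller is attached over TCP with data digest (--ddgst) enabled, crc32c corruption is injected through accel_error_inject_error, the workload is run, and the command_transient_transport_error counter from bdev_get_iostat decides pass/fail. A minimal sketch of that sequence, assembled only from the RPC calls visible in the trace (socket path, target address and NQN are the values shown there; rpc.py paths are shortened to repo-relative form, and the accel_error_inject_error calls are left as the rpc_cmd helper invocation because the trace does not show which socket that helper targets):

    # enable NVMe error statistics and set the bdev retry count on the bdevperf app
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # make sure crc32c error injection is disabled while the controller attaches
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # attach the NVMe-oF/TCP controller with data digest enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # start injecting corrupted crc32c results (arguments copied verbatim from the trace)
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the configured workload through bdevperf's RPC interface
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # read back the transient transport error count used as the pass criterion
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The count only has to be greater than zero for the test to pass, as in the (( 185 > 0 )) check recorded for the 4096-byte randwrite run earlier in the log.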
00:26:35.093 [2024-04-26 14:09:14.547457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.547901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.547954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.552582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.553008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.553061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.557465] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.557891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.557942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.562343] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.562746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.562795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.567136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.567574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.567622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.571909] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.572341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.572390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.576530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.576950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.576999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.093 [2024-04-26 14:09:14.581455] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.093 [2024-04-26 14:09:14.581877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.093 [2024-04-26 14:09:14.581926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.586306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.586717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.586759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.591148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.591571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.591618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.595928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.596340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.596384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.600795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.601223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.601270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.605655] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.606076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.606133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.610505] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.610914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 
14:09:14.610963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.615320] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.615728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.615776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.620146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.620562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.620610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.624978] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.625383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.625425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.629842] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.630266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.630313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.634753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.635205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.635252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.639597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.640009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.640063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.644521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.644925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.644974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.094 [2024-04-26 14:09:14.649250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.094 [2024-04-26 14:09:14.649663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.094 [2024-04-26 14:09:14.649710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
...
00:26:35.619 [2024-04-26 14:09:15.268309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.619 [2024-04-26 14:09:15.268726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.619 [2024-04-26 14:09:15.268775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.619 [2024-04-26 14:09:15.273235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.619 [2024-04-26 14:09:15.273655] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.619 [2024-04-26 14:09:15.273703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.619 [2024-04-26 14:09:15.278033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.619 [2024-04-26 14:09:15.278466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.619 [2024-04-26 14:09:15.278513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.619 [2024-04-26 14:09:15.282907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.619 [2024-04-26 14:09:15.283339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.619 [2024-04-26 14:09:15.283381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.619 [2024-04-26 14:09:15.287685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.619 [2024-04-26 14:09:15.288098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.619 [2024-04-26 14:09:15.288146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.880 [2024-04-26 14:09:15.292479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.880 [2024-04-26 14:09:15.292880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.880 [2024-04-26 14:09:15.292938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.880 [2024-04-26 14:09:15.297352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.880 [2024-04-26 14:09:15.297739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.297798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.302205] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.302627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.302674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.307059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.307492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.307532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.311915] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.312326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.312374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.316776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.317206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.317252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.321681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.322107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.322167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.326501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.326913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.326961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.331432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.331841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.331897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.336349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.336755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.336805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.341130] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.341545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.341593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.345981] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.346408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.346450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.350793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.351213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.351260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.355579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.355973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.356017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.360367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.360781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.360830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.365245] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.365663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.365710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.370181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.370596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.370644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.374934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.375358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.375404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.379768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.380214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.380262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.384711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.385119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.385182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.389609] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.390022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.390069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.394480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.394893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.394941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.399368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.399780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.399829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.404250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.404662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.404714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.409117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.409538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.409585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.413987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.414403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.414445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.418799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.419215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.419261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.423528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.881 [2024-04-26 14:09:15.423944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.881 [2024-04-26 14:09:15.423993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.881 [2024-04-26 14:09:15.428436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.428845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.428893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.433271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.433664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.433714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.438079] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.438511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.438559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.442905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.443329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.443371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.447780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.448202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.448249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.452594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.452991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.453038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.457425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.457847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.457896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.462242] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.462644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.462694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.466927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.467346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.467390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.471709] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.472094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.472143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.476530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.476945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.476994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.481329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.481742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.481789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.486064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.486488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.486535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.490864] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.491284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.491340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.495772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.496184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.496232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.500563] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.500976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.501026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.505378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.505799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.505853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.510084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.510516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.510563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.514927] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.515362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.515407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.519802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.520226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.520273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.524586] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.524987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.525040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.529400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.529816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.529875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.534313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.534727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.534774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.539194] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.539609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.539656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.544068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.544502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.544549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:35.882 [2024-04-26 14:09:15.548918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:35.882 [2024-04-26 14:09:15.549363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.882 [2024-04-26 14:09:15.549403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.553860] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.554315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.554364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.558777] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.559240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.559287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.563647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.564056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.564105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.568526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.568939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.568988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.573448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.573879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.573946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.578412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.578828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.578876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.583326] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.583743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.583791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.142 [2024-04-26 14:09:15.588086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.142 [2024-04-26 14:09:15.588510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.142 [2024-04-26 14:09:15.588557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.592953] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.593383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.593427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.597856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.598303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.598351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.602848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.603273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.603317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.607706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.608104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.608168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.612554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.612970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.613020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.617436] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.617860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.617927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.622329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.622743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.622792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.627215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.627633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.627680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.632070] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.632521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.632568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.636967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.637395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.637446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.641926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.642355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.642401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.646812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.647232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.647276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.651664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.652080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.652129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.656489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.656911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.656959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.661332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.661767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.661816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.666133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.666531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.666574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.670837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.671240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.671289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.675676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.676089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.676139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.680508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.680915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.680964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.685372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.685770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.685831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.690198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.690591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.690639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.695065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.695479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.695538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.699985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.700398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.700455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.704905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.705341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.705382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.709863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.710317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.710365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.714830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.715253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.715300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.143 [2024-04-26 14:09:15.719732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.143 [2024-04-26 14:09:15.720131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.143 [2024-04-26 14:09:15.720192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.724641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.725051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.725101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.729573] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.730002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.730051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.734623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.735087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.735137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.739521] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.739950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.740001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.744352] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.744769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.744811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.749178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.749597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.749644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.754060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.754507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.754556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.758900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.759319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.759374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.763766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.764196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.764243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.768564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.768976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.769021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.773386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.773795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.773854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.778303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.778717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.778765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.783094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.783517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.783564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.788000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.788406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.788454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.792790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.793211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.793258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.797577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.797999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.798047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.802300] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.802707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.802754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.807149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.807561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.807608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.144 [2024-04-26 14:09:15.811920] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.144 [2024-04-26 14:09:15.812348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.144 [2024-04-26 14:09:15.812401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.816810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.817236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.817285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.821684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.822108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.822168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.826508] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.826924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.826973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.831366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.831762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.831810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.836180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.836602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.836649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.840995] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.841426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.845880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.846304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.846352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.850681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.851098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.851146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.855527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.855924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.855979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.860351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.860767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.860814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.865228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.865644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.865690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.870131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.870561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.870608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.874958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.875383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.875429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.879769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.880198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.880243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.884460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.884886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.884935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.889287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.889698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.889746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.894112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.894514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.894562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.898998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.899413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.899453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.903796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.904215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.904257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.406 [2024-04-26 14:09:15.908579] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.406 [2024-04-26 14:09:15.908976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.406 [2024-04-26 14:09:15.909032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.913386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.913807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.913863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.918127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.918555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.918603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.922977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.923396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.923439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.927798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.928229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.928277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.932682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.933088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.933137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.937492] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.937901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.937944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.942295] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.942713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.942760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.947069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.947499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.947541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.951816] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.952241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.952289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.956593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.957004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.957052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.961405] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.961816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.961875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.966168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.966586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.966633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.971023] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.971452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.971503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.975863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.976272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.976317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.980619] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.981031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.981081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.985461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.985881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.985933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.990316] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.990733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.990780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.995129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:15.995556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:15.995602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:15.999902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.000319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.000372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.004779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.005210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.005257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.009627] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.010052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.010097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.014306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.014720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.014768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.019105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.019533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.019580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.023906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.024314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.024354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.028677] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.029090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.029139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.033458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.033882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:36.407 [2024-04-26 14:09:16.033930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.407 [2024-04-26 14:09:16.038304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.407 [2024-04-26 14:09:16.038712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.038760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.043125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.043535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.043582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.047861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.048265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.048305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.052649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.053065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.053115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.057477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.057905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.057956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.062264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.062679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.062726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.066961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.067396] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.067443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.071769] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.072177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.072212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.408 [2024-04-26 14:09:16.076636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.408 [2024-04-26 14:09:16.077050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.408 [2024-04-26 14:09:16.077099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.081474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.081894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.081948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.086143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.086564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.086611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.090856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.091294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.091336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.095626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.096044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.096094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.100450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.100861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.100909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.105220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.105633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.105681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.110041] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.110457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.110504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.114923] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.115346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.115379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.119791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.120203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.120250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.669 [2024-04-26 14:09:16.124778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.669 [2024-04-26 14:09:16.125200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.669 [2024-04-26 14:09:16.125245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.129737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.130184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.130228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.134811] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.135247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.135296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.139742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.140206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.140253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.144807] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.145262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.145311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.149849] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.150299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.150347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.154854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.155270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.155330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.159828] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.160264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.160311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.164728] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.165168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.165214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.169650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.170074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.170125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.174615] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.175019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.175068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.179458] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.179861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.179904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.184377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.184782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.184824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.189331] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.189735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.189778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.194257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.194661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.194710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.199039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.199452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.199495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.203988] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.204414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.204454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.208895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.209319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.209362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.213801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.214221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.214262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.218715] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.219117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.219180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.223529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.223935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.223984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.228444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.228841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.228890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.233346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.233764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.233812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.238276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.238689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.238737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.243139] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.243565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.243607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.247964] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.248378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.248421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.252776] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.670 [2024-04-26 14:09:16.253215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.670 [2024-04-26 14:09:16.253258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.670 [2024-04-26 14:09:16.257721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.258142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.258199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.262592] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.263006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.263054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.267437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.267843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.267893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.272362] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.272771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.272820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.277148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.277570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.277617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.281992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.282411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.282455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.286825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.287234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.287279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.291613] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.292025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.292074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.296491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.296882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.296927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.301377] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.301766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.301811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.306321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.306711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.306762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.311267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.311663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.311710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.316217] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.316609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.316644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.321198] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.321586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.321632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.326095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.326495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.326546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.331028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.331452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.331496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.335910] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.336336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.336384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.671 [2024-04-26 14:09:16.340895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.671 [2024-04-26 14:09:16.341298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.671 [2024-04-26 14:09:16.341340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.345762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.346165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.346207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.350713] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.351102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.351170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.355639] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.356049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.356108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.360570] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.360949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.361006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.365474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.365893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.365937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.370376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.370767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.370812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.375257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.375646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.375691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.380095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.380501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.380550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.385053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.385467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.385520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.390028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.390438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.390487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.394991] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.395397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.395440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.399901] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.400313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.400354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.404872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.405271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.405316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.409785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.410202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.410240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.414659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.415053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.415100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.419553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.419954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.420005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.424443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.424846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.424896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.429354] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.429751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.429807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.434195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.434592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.434643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.438993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.439405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.439447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.443836] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.444256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.444297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.448778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.932 [2024-04-26 14:09:16.449187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.932 [2024-04-26 14:09:16.449221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.932 [2024-04-26 14:09:16.453612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.454020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.454062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.458497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.458884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.458940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.463406] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.463796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.463842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.468259] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.468665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.468713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.473088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.473494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.473543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.477971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.478385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.478428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.482760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.483149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.483205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.487589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.487989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.488046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.492395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.492796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.492843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.497230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.497611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.497654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.502044] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.502435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.502476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.507008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.507430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.507475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.511989] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.512408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.512453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.516802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.517212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.517254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.521663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.522088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.522131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.526622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.527029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.527078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.933 [2024-04-26 14:09:16.531654] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:36.933 [2024-04-26 14:09:16.532062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.933 [2024-04-26 14:09:16.532112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.933 00:26:36.933 Latency(us) 00:26:36.933 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.933 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:36.933 nvme0n1 : 2.00 6356.48 794.56 0.00 0.00 2511.96 2145.05 10317.31 00:26:36.933 =================================================================================================================== 00:26:36.933 Total : 6356.48 794.56 0.00 0.00 2511.96 2145.05 10317.31 00:26:36.933 0 00:26:36.933 14:09:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:36.933 14:09:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:36.933 | .driver_specific 00:26:36.933 | .nvme_error 00:26:36.933 | .status_code 00:26:36.933 | .command_transient_transport_error' 00:26:36.933 14:09:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:36.933 14:09:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:37.193 14:09:16 -- host/digest.sh@71 -- # (( 410 > 0 )) 00:26:37.193 14:09:16 -- host/digest.sh@73 -- # killprocess 88463 00:26:37.193 14:09:16 -- common/autotest_common.sh@936 -- # '[' -z 88463 ']' 00:26:37.193 14:09:16 -- common/autotest_common.sh@940 -- # kill -0 88463 00:26:37.193 14:09:16 -- common/autotest_common.sh@941 -- # uname 00:26:37.193 14:09:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:37.193 14:09:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88463 00:26:37.193 killing process with pid 88463 00:26:37.193 Received shutdown signal, test time was about 2.000000 seconds 00:26:37.193 00:26:37.193 Latency(us) 00:26:37.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.193 =================================================================================================================== 00:26:37.193 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:37.193 14:09:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:37.193 14:09:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:37.193 14:09:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88463' 00:26:37.193 14:09:16 -- common/autotest_common.sh@955 -- # kill 88463 00:26:37.193 14:09:16 -- common/autotest_common.sh@960 -- # wait 88463 00:26:38.572 14:09:18 -- host/digest.sh@116 -- # killprocess 88128 00:26:38.572 14:09:18 -- common/autotest_common.sh@936 -- # '[' -z 88128 ']' 00:26:38.572 14:09:18 -- common/autotest_common.sh@940 -- # kill -0 88128 00:26:38.572 14:09:18 -- common/autotest_common.sh@941 -- # uname 00:26:38.572 14:09:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:38.572 14:09:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88128 00:26:38.572 killing process with pid 88128 00:26:38.572 14:09:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:38.572 14:09:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:38.572 14:09:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88128' 00:26:38.572 14:09:18 -- common/autotest_common.sh@955 -- # kill 88128 00:26:38.572 14:09:18 -- common/autotest_common.sh@960 -- # wait 88128 00:26:39.952 00:26:39.952 real 0m22.319s 00:26:39.952 user 0m40.179s 00:26:39.952 sys 0m5.184s 00:26:39.952 14:09:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:39.952 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:26:39.952 ************************************ 00:26:39.952 END TEST nvmf_digest_error 
00:26:39.952 ************************************ 00:26:39.952 14:09:19 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:39.952 14:09:19 -- host/digest.sh@150 -- # nvmftestfini 00:26:39.952 14:09:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:39.952 14:09:19 -- nvmf/common.sh@117 -- # sync 00:26:39.952 14:09:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:39.952 14:09:19 -- nvmf/common.sh@120 -- # set +e 00:26:39.952 14:09:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:39.952 14:09:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:39.952 rmmod nvme_tcp 00:26:39.952 rmmod nvme_fabrics 00:26:39.952 rmmod nvme_keyring 00:26:39.952 14:09:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:39.952 14:09:19 -- nvmf/common.sh@124 -- # set -e 00:26:39.952 14:09:19 -- nvmf/common.sh@125 -- # return 0 00:26:39.952 14:09:19 -- nvmf/common.sh@478 -- # '[' -n 88128 ']' 00:26:39.952 14:09:19 -- nvmf/common.sh@479 -- # killprocess 88128 00:26:39.952 14:09:19 -- common/autotest_common.sh@936 -- # '[' -z 88128 ']' 00:26:39.952 14:09:19 -- common/autotest_common.sh@940 -- # kill -0 88128 00:26:39.952 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (88128) - No such process 00:26:39.952 Process with pid 88128 is not found 00:26:39.952 14:09:19 -- common/autotest_common.sh@963 -- # echo 'Process with pid 88128 is not found' 00:26:39.952 14:09:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:39.952 14:09:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:39.952 14:09:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:39.952 14:09:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:39.952 14:09:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:39.952 14:09:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.952 14:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.952 14:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.210 14:09:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:40.210 00:26:40.210 real 0m47.320s 00:26:40.210 user 1m23.754s 00:26:40.210 sys 0m10.912s 00:26:40.210 14:09:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:40.210 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.210 ************************************ 00:26:40.210 END TEST nvmf_digest 00:26:40.210 ************************************ 00:26:40.210 14:09:19 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:26:40.210 14:09:19 -- nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:26:40.210 14:09:19 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:40.210 14:09:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:40.210 14:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.210 14:09:19 -- common/autotest_common.sh@10 -- # set +x 00:26:40.210 ************************************ 00:26:40.210 START TEST nvmf_mdns_discovery 00:26:40.210 ************************************ 00:26:40.210 14:09:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:26:40.470 * Looking for test storage... 
00:26:40.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:40.470 14:09:19 -- nvmf/common.sh@7 -- # uname -s 00:26:40.470 14:09:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:40.470 14:09:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:40.470 14:09:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:40.470 14:09:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:40.470 14:09:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:40.470 14:09:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:40.470 14:09:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:40.470 14:09:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:40.470 14:09:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:40.470 14:09:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:40.470 14:09:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:26:40.470 14:09:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:26:40.470 14:09:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:40.470 14:09:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:40.470 14:09:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:40.470 14:09:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:40.470 14:09:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:40.470 14:09:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:40.470 14:09:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:40.470 14:09:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:40.470 14:09:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.470 14:09:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.470 14:09:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.470 14:09:19 -- paths/export.sh@5 -- # export PATH 00:26:40.470 14:09:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:40.470 14:09:19 -- nvmf/common.sh@47 -- # : 0 00:26:40.470 14:09:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:40.470 14:09:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:40.470 14:09:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:40.470 14:09:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:40.470 14:09:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:40.470 14:09:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:40.470 14:09:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:40.470 14:09:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:26:40.470 14:09:19 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:26:40.470 14:09:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:40.470 14:09:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.470 14:09:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:40.470 14:09:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:40.470 14:09:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:40.470 14:09:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.470 14:09:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.470 14:09:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.470 14:09:20 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:40.470 14:09:20 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:40.470 14:09:20 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:40.470 14:09:20 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:40.470 14:09:20 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:40.470 14:09:20 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:40.470 14:09:20 -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:26:40.470 14:09:20 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.470 14:09:20 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:40.470 14:09:20 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:40.470 14:09:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:40.470 14:09:20 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:40.470 14:09:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:40.470 14:09:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.470 14:09:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:40.470 14:09:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:40.470 14:09:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:40.470 14:09:20 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:40.470 14:09:20 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:40.470 14:09:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:40.470 Cannot find device "nvmf_tgt_br" 00:26:40.470 14:09:20 -- nvmf/common.sh@155 -- # true 00:26:40.470 14:09:20 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:40.470 Cannot find device "nvmf_tgt_br2" 00:26:40.470 14:09:20 -- nvmf/common.sh@156 -- # true 00:26:40.470 14:09:20 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:40.470 14:09:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:40.470 Cannot find device "nvmf_tgt_br" 00:26:40.470 14:09:20 -- nvmf/common.sh@158 -- # true 00:26:40.470 14:09:20 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:40.470 Cannot find device "nvmf_tgt_br2" 00:26:40.470 14:09:20 -- nvmf/common.sh@159 -- # true 00:26:40.470 14:09:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:40.729 14:09:20 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:40.730 14:09:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:40.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:40.730 14:09:20 -- nvmf/common.sh@162 -- # true 00:26:40.730 14:09:20 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:40.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:40.730 14:09:20 -- nvmf/common.sh@163 -- # true 00:26:40.730 14:09:20 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:40.730 14:09:20 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:40.730 14:09:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:40.730 14:09:20 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:40.730 14:09:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:40.730 14:09:20 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:40.730 14:09:20 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:40.730 14:09:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:40.730 14:09:20 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:40.730 14:09:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:40.730 14:09:20 -- nvmf/common.sh@184 -- # ip 
link set nvmf_init_br up 00:26:40.730 14:09:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:40.730 14:09:20 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:40.730 14:09:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:40.730 14:09:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:40.730 14:09:20 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:40.730 14:09:20 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:40.730 14:09:20 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:40.730 14:09:20 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:40.730 14:09:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:40.990 14:09:20 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:40.990 14:09:20 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:40.990 14:09:20 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:40.990 14:09:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:40.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:26:40.990 00:26:40.990 --- 10.0.0.2 ping statistics --- 00:26:40.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.990 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:26:40.990 14:09:20 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:40.990 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:40.990 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:26:40.990 00:26:40.990 --- 10.0.0.3 ping statistics --- 00:26:40.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.990 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:40.990 14:09:20 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:40.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:26:40.990 00:26:40.990 --- 10.0.0.1 ping statistics --- 00:26:40.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.990 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:40.990 14:09:20 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.990 14:09:20 -- nvmf/common.sh@422 -- # return 0 00:26:40.990 14:09:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:40.990 14:09:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.990 14:09:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:40.990 14:09:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:40.990 14:09:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.990 14:09:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:40.990 14:09:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:40.990 14:09:20 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:26:40.990 14:09:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:40.990 14:09:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:40.990 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:40.990 14:09:20 -- nvmf/common.sh@470 -- # nvmfpid=88781 00:26:40.990 14:09:20 -- nvmf/common.sh@471 -- # waitforlisten 88781 00:26:40.990 14:09:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:26:40.990 14:09:20 -- common/autotest_common.sh@817 -- # '[' -z 88781 ']' 00:26:40.990 14:09:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.990 14:09:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:40.990 14:09:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.990 14:09:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:40.990 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:26:40.990 [2024-04-26 14:09:20.590439] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:40.990 [2024-04-26 14:09:20.590567] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.249 [2024-04-26 14:09:20.763353] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.508 [2024-04-26 14:09:20.997237] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.508 [2024-04-26 14:09:20.997299] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.508 [2024-04-26 14:09:20.997318] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.508 [2024-04-26 14:09:20.997343] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.508 [2024-04-26 14:09:20.997358] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
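The ping checks above exercise the veth/bridge topology that nvmf_veth_init builds before the target application is started inside the network namespace. Condensed from the nvmf/common.sh trace above (same interface names and addresses; a sketch, not the verbatim script), the setup amounts to:

# Namespace, veth pairs, bridge and reachability checks (nvmf/common.sh@166-207)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # namespace -> host

With the data path verified, the target (nvmf_tgt -m 0x2 --wait-for-rpc) is launched under ip netns exec, as traced above, so that every listener it creates lives on 10.0.0.2/10.0.0.3.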
00:26:41.508 [2024-04-26 14:09:20.997400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.766 14:09:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:41.766 14:09:21 -- common/autotest_common.sh@850 -- # return 0 00:26:41.766 14:09:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:41.766 14:09:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:41.766 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:41.766 14:09:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.766 14:09:21 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:26:41.766 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.766 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.025 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.025 14:09:21 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:26:42.025 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.025 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 [2024-04-26 14:09:21.845316] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 [2024-04-26 14:09:21.857485] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 null0 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 null1 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 null2 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 null3 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
00:26:42.284 14:09:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.284 14:09:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@47 -- # hostpid=88834 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:42.284 14:09:21 -- host/mdns_discovery.sh@48 -- # waitforlisten 88834 /tmp/host.sock 00:26:42.284 14:09:21 -- common/autotest_common.sh@817 -- # '[' -z 88834 ']' 00:26:42.284 14:09:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:26:42.284 14:09:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:42.284 14:09:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:42.284 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:42.284 14:09:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:42.284 14:09:21 -- common/autotest_common.sh@10 -- # set +x 00:26:42.544 [2024-04-26 14:09:22.011914] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:26:42.544 [2024-04-26 14:09:22.012030] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88834 ] 00:26:42.544 [2024-04-26 14:09:22.182342] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.803 [2024-04-26 14:09:22.417370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.370 14:09:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:43.370 14:09:22 -- common/autotest_common.sh@850 -- # return 0 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@57 -- # avahipid=88863 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@58 -- # sleep 1 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:26:43.370 14:09:22 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:26:43.370 Process 1011 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:26:43.370 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:26:43.370 Successfully dropped root privileges. 00:26:43.370 avahi-daemon 0.8 starting up. 00:26:43.370 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:26:43.370 Successfully called chroot(). 00:26:43.370 Successfully dropped remaining capabilities. 00:26:43.370 No service file found in /etc/avahi/services. 00:26:43.370 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:26:43.370 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:26:43.370 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:26:43.370 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:26:43.370 Network interface enumeration completed. 
00:26:43.370 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:26:43.370 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:26:43.370 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:26:43.370 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:26:44.305 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 4187831056. 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:44.305 14:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.305 14:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.305 14:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:44.305 14:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.305 14:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.305 14:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:44.305 14:09:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.305 14:09:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@68 -- # sort 00:26:44.305 14:09:23 -- host/mdns_discovery.sh@68 -- # xargs 00:26:44.564 14:09:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.564 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.564 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # sort 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # xargs 00:26:44.564 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:44.564 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.564 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.564 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.564 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # sort 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # xargs 00:26:44.564 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:26:44.564 
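From this point the test drives everything through the host application's RPC socket (/tmp/host.sock). Reconstructed from the xtrace output (a sketch of the traced helpers, not quoted verbatim from mdns_discovery.sh), the discovery start and the polling helpers behave roughly like:

# rpc_cmd is a thin wrapper around scripts/rpc.py pointed at the host socket
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"

# Ask the host to browse _nvme-disc._tcp via mDNS and attach whatever it finds
$rpc bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test

# Helpers polled by the assertions in the trace
get_subsystem_names() { $rpc bdev_nvme_get_controllers         | jq -r '.[].name' | sort | xargs; }
get_bdev_list()       { $rpc bdev_get_bdevs                    | jq -r '.[].name' | sort | xargs; }
get_subsystem_paths() { $rpc bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }

Until the published CDC records are resolved and the controllers attach, both listing helpers return empty strings, which is what the [[ '' == '' ]] checks in the trace assert; once mdns0_nvme0/mdns1_nvme0 appear, the path helper is expected to report port 4420.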
14:09:24 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.564 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.564 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # sort 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@64 -- # xargs 00:26:44.564 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:44.564 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.564 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.564 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.564 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # sort 00:26:44.564 14:09:24 -- host/mdns_discovery.sh@68 -- # xargs 00:26:44.564 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@64 -- # sort 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@64 -- # xargs 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 [2024-04-26 14:09:24.264572] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 [2024-04-26 14:09:24.299721] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 14:09:24 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 [2024-04-26 14:09:24.359612] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:44.822 14:09:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:44.822 14:09:24 -- common/autotest_common.sh@10 -- # set +x 00:26:44.822 [2024-04-26 14:09:24.371534] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:44.822 14:09:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=88914 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:26:44.822 14:09:24 -- host/mdns_discovery.sh@125 -- # sleep 5 00:26:45.803 Established under name 'CDC' 00:26:45.803 [2024-04-26 14:09:25.163111] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:26:46.061 [2024-04-26 14:09:25.562513] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:46.061 [2024-04-26 14:09:25.562571] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:26:46.061 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:46.061 cookie is 0 00:26:46.061 is_local: 1 00:26:46.061 our_own: 0 00:26:46.061 wide_area: 0 00:26:46.061 multicast: 1 00:26:46.061 cached: 1 00:26:46.061 [2024-04-26 14:09:25.662331] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:46.061 [2024-04-26 14:09:25.662372] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:26:46.061 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:46.061 cookie is 0 00:26:46.061 is_local: 1 00:26:46.061 our_own: 0 00:26:46.061 wide_area: 0 00:26:46.061 multicast: 1 00:26:46.061 cached: 1 00:26:46.996 [2024-04-26 14:09:26.574712] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:46.996 [2024-04-26 14:09:26.574761] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:26:46.996 [2024-04-26 14:09:26.574792] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:46.996 [2024-04-26 14:09:26.660744] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:26:47.254 [2024-04-26 14:09:26.675280] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:47.254 [2024-04-26 14:09:26.675312] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:47.254 [2024-04-26 14:09:26.675339] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.254 [2024-04-26 14:09:26.729845] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:47.254 [2024-04-26 14:09:26.729885] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:47.254 [2024-04-26 14:09:26.762288] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:26:47.254 [2024-04-26 14:09:26.825674] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:47.254 [2024-04-26 14:09:26.825713] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:26:49.785 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@80 -- # sort 00:26:49.785 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@80 -- # xargs 00:26:49.785 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:49.785 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:49.785 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@76 -- # sort 00:26:49.785 14:09:29 -- host/mdns_discovery.sh@76 -- # xargs 00:26:49.785 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@68 -- # sort 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@68 -- # 
xargs 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@64 -- # sort 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@64 -- # xargs 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # xargs 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@72 -- # xargs 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:26:50.091 14:09:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:50.091 14:09:29 -- common/autotest_common.sh@10 -- # set +x 00:26:50.091 14:09:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:50.091 14:09:29 -- host/mdns_discovery.sh@139 -- # sleep 1 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.465 14:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:51.465 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@64 -- # sort 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@64 -- # xargs 00:26:51.465 14:09:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:51.465 14:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.465 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:26:51.465 14:09:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:51.465 14:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.465 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.465 [2024-04-26 14:09:30.870259] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:51.465 [2024-04-26 14:09:30.870744] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:51.465 [2024-04-26 14:09:30.870939] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.465 [2024-04-26 14:09:30.871105] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:51.465 [2024-04-26 14:09:30.871225] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:51.465 14:09:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:26:51.465 14:09:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:51.465 14:09:30 -- common/autotest_common.sh@10 -- # set +x 00:26:51.465 [2024-04-26 14:09:30.878239] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:51.465 [2024-04-26 14:09:30.878811] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:51.465 [2024-04-26 14:09:30.878893] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:51.465 14:09:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:51.465 14:09:30 -- host/mdns_discovery.sh@149 -- # sleep 1 00:26:51.465 [2024-04-26 14:09:31.008666] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:26:51.465 [2024-04-26 14:09:31.009651] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:26:51.465 [2024-04-26 14:09:31.071957] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:51.465 [2024-04-26 14:09:31.072001] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:51.465 [2024-04-26 14:09:31.072013] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:51.465 [2024-04-26 14:09:31.072041] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:51.465 [2024-04-26 14:09:31.072108] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:51.465 [2024-04-26 14:09:31.072121] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:51.465 [2024-04-26 14:09:31.072172] 
bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.465 [2024-04-26 14:09:31.072193] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.465 [2024-04-26 14:09:31.117612] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:51.465 [2024-04-26 14:09:31.117646] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:51.465 [2024-04-26 14:09:31.117709] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:26:51.465 [2024-04-26 14:09:31.117720] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@68 -- # sort 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:52.402 14:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@68 -- # xargs 00:26:52.402 14:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.402 14:09:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.402 14:09:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.402 14:09:31 -- common/autotest_common.sh@10 -- # set +x 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@64 -- # sort 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:52.402 14:09:31 -- host/mdns_discovery.sh@64 -- # xargs 00:26:52.402 14:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.402 14:09:32 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:52.402 14:09:32 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:26:52.402 14:09:32 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:52.402 14:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.402 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:26:52.402 14:09:32 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:52.402 14:09:32 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:52.402 14:09:32 -- host/mdns_discovery.sh@72 -- # xargs 00:26:52.402 14:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@72 -- # 
xargs 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:52.664 14:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.664 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:52.664 14:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:26:52.664 14:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.664 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:26:52.664 14:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:52.664 14:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.664 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:26:52.664 [2024-04-26 14:09:32.186532] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:52.664 [2024-04-26 14:09:32.186733] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.664 [2024-04-26 14:09:32.186887] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:52.664 [2024-04-26 14:09:32.187078] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:52.664 [2024-04-26 14:09:32.187211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.664 [2024-04-26 14:09:32.187377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.664 [2024-04-26 14:09:32.187396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.664 [2024-04-26 14:09:32.187409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.664 [2024-04-26 14:09:32.187422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.664 [2024-04-26 14:09:32.187434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.664 [2024-04-26 14:09:32.187446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.664 [2024-04-26 14:09:32.187458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.664 [2024-04-26 14:09:32.187470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 
00:26:52.664 14:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.664 14:09:32 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:26:52.664 14:09:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.664 14:09:32 -- common/autotest_common.sh@10 -- # set +x 00:26:52.664 [2024-04-26 14:09:32.197141] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.664 [2024-04-26 14:09:32.198579] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:52.664 [2024-04-26 14:09:32.198784] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:26:52.664 [2024-04-26 14:09:32.201193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.664 [2024-04-26 14:09:32.201388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.664 [2024-04-26 14:09:32.201569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.665 [2024-04-26 14:09:32.201706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.665 [2024-04-26 14:09:32.201784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.665 [2024-04-26 14:09:32.201801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.665 [2024-04-26 14:09:32.201826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.665 [2024-04-26 14:09:32.201839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.665 [2024-04-26 14:09:32.201852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.665 14:09:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.665 14:09:32 -- host/mdns_discovery.sh@162 -- # sleep 1 00:26:52.665 [2024-04-26 14:09:32.207147] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.665 [2024-04-26 14:09:32.207295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.207343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.207365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.665 [2024-04-26 14:09:32.207380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.207401] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.207420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.665 [2024-04-26 14:09:32.207432] 
nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.665 [2024-04-26 14:09:32.207447] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.665 [2024-04-26 14:09:32.207476] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.665 [2024-04-26 14:09:32.211123] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.217214] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.665 [2024-04-26 14:09:32.217309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.217357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.217374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.665 [2024-04-26 14:09:32.217386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.217404] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.217420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.665 [2024-04-26 14:09:32.217431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.665 [2024-04-26 14:09:32.217443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.665 [2024-04-26 14:09:32.217460] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.665 [2024-04-26 14:09:32.221146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.665 [2024-04-26 14:09:32.221269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.221322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.221343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.665 [2024-04-26 14:09:32.221362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.221388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.221411] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.665 [2024-04-26 14:09:32.221428] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.665 [2024-04-26 14:09:32.221445] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.665 [2024-04-26 14:09:32.221469] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.665 [2024-04-26 14:09:32.227258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.665 [2024-04-26 14:09:32.227358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.227400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.227415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.665 [2024-04-26 14:09:32.227428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.227446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.227462] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.665 [2024-04-26 14:09:32.227473] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.665 [2024-04-26 14:09:32.227485] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.665 [2024-04-26 14:09:32.227501] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.665 [2024-04-26 14:09:32.231209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.665 [2024-04-26 14:09:32.231294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.231334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.231349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.665 [2024-04-26 14:09:32.231361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.231379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.231394] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.665 [2024-04-26 14:09:32.231405] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.665 [2024-04-26 14:09:32.231416] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.665 [2024-04-26 14:09:32.231432] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.665 [2024-04-26 14:09:32.237313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.665 [2024-04-26 14:09:32.237397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.237437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.237452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.665 [2024-04-26 14:09:32.237464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.237481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.237497] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.665 [2024-04-26 14:09:32.237507] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.665 [2024-04-26 14:09:32.237519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.665 [2024-04-26 14:09:32.237535] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.665 [2024-04-26 14:09:32.241251] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.665 [2024-04-26 14:09:32.241367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.241408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.665 [2024-04-26 14:09:32.241423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.665 [2024-04-26 14:09:32.241435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.665 [2024-04-26 14:09:32.241452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.665 [2024-04-26 14:09:32.241478] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.241488] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.241500] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.666 [2024-04-26 14:09:32.241516] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.666 [2024-04-26 14:09:32.247356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.666 [2024-04-26 14:09:32.247446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.247486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.247501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.666 [2024-04-26 14:09:32.247514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.247531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.247547] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.247557] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.247569] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.666 [2024-04-26 14:09:32.247585] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.666 [2024-04-26 14:09:32.251321] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.666 [2024-04-26 14:09:32.251415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.251456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.251471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.666 [2024-04-26 14:09:32.251483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.251501] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.251516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.251527] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.251538] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.666 [2024-04-26 14:09:32.251554] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.666 [2024-04-26 14:09:32.257407] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.666 [2024-04-26 14:09:32.257504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.257548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.257565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.666 [2024-04-26 14:09:32.257579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.257599] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.257615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.257627] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.257640] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.666 [2024-04-26 14:09:32.257658] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.666 [2024-04-26 14:09:32.261364] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.666 [2024-04-26 14:09:32.261457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.261500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.261516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.666 [2024-04-26 14:09:32.261530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.261549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.261566] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.261577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.261589] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.666 [2024-04-26 14:09:32.261607] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.666 [2024-04-26 14:09:32.267457] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.666 [2024-04-26 14:09:32.267556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.267598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.267613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.666 [2024-04-26 14:09:32.267626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.267645] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.267660] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.267671] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.267683] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.666 [2024-04-26 14:09:32.267700] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.666 [2024-04-26 14:09:32.271410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.666 [2024-04-26 14:09:32.271495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.271534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.271548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.666 [2024-04-26 14:09:32.271560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.271577] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.271593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.271603] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.271614] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.666 [2024-04-26 14:09:32.271630] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.666 [2024-04-26 14:09:32.277509] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.666 [2024-04-26 14:09:32.277593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.277633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.666 [2024-04-26 14:09:32.277648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.666 [2024-04-26 14:09:32.277661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.666 [2024-04-26 14:09:32.277678] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.666 [2024-04-26 14:09:32.277693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.666 [2024-04-26 14:09:32.277704] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.666 [2024-04-26 14:09:32.277715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.666 [2024-04-26 14:09:32.277731] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.666 [2024-04-26 14:09:32.281454] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.666 [2024-04-26 14:09:32.281555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.281599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.281615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.667 [2024-04-26 14:09:32.281628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.281647] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.281664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.281675] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.281687] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.667 [2024-04-26 14:09:32.281704] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.667 [2024-04-26 14:09:32.287554] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.667 [2024-04-26 14:09:32.287645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.287685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.287700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.667 [2024-04-26 14:09:32.287713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.287730] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.287746] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.287757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.287768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.667 [2024-04-26 14:09:32.287789] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.667 [2024-04-26 14:09:32.291510] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.667 [2024-04-26 14:09:32.291614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.291656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.291671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.667 [2024-04-26 14:09:32.291684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.291702] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.291717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.291728] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.291739] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.667 [2024-04-26 14:09:32.291755] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.667 [2024-04-26 14:09:32.297601] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.667 [2024-04-26 14:09:32.297686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.297726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.297741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.667 [2024-04-26 14:09:32.297753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.297770] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.297786] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.297796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.297807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.667 [2024-04-26 14:09:32.297831] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.667 [2024-04-26 14:09:32.301564] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.667 [2024-04-26 14:09:32.301687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.301763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.301801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.667 [2024-04-26 14:09:32.301837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.301872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.301935] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.301954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.301975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.667 [2024-04-26 14:09:32.302005] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.667 [2024-04-26 14:09:32.307658] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.667 [2024-04-26 14:09:32.307796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.307852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.307874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.667 [2024-04-26 14:09:32.307909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.307936] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.307983] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.308001] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.308019] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.667 [2024-04-26 14:09:32.308044] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.667 [2024-04-26 14:09:32.311623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.667 [2024-04-26 14:09:32.311713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.311755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.311770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.667 [2024-04-26 14:09:32.311783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.311801] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.311816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.667 [2024-04-26 14:09:32.311827] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.667 [2024-04-26 14:09:32.311838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.667 [2024-04-26 14:09:32.311854] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.667 [2024-04-26 14:09:32.317733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.667 [2024-04-26 14:09:32.317825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.317866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.667 [2024-04-26 14:09:32.317881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.667 [2024-04-26 14:09:32.317893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.667 [2024-04-26 14:09:32.317911] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.667 [2024-04-26 14:09:32.317936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.668 [2024-04-26 14:09:32.317948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.668 [2024-04-26 14:09:32.317959] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.668 [2024-04-26 14:09:32.317992] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.668 [2024-04-26 14:09:32.321666] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:26:52.668 [2024-04-26 14:09:32.321762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.668 [2024-04-26 14:09:32.321804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.668 [2024-04-26 14:09:32.321827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005c40 with addr=10.0.0.3, port=4420 00:26:52.668 [2024-04-26 14:09:32.321839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005c40 is same with the state(5) to be set 00:26:52.668 [2024-04-26 14:09:32.321857] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005c40 (9): Bad file descriptor 00:26:52.668 [2024-04-26 14:09:32.321873] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:26:52.668 [2024-04-26 14:09:32.321883] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:26:52.668 [2024-04-26 14:09:32.321894] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:26:52.668 [2024-04-26 14:09:32.321910] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:52.668 [2024-04-26 14:09:32.327776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:52.668 [2024-04-26 14:09:32.327862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.668 [2024-04-26 14:09:32.327902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.668 [2024-04-26 14:09:32.327916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000007a40 with addr=10.0.0.2, port=4420 00:26:52.668 [2024-04-26 14:09:32.327929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007a40 is same with the state(5) to be set 00:26:52.668 [2024-04-26 14:09:32.327946] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000007a40 (9): Bad file descriptor 00:26:52.668 [2024-04-26 14:09:32.327979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.668 [2024-04-26 14:09:32.327990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.668 [2024-04-26 14:09:32.328001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.668 [2024-04-26 14:09:32.328018] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.668 [2024-04-26 14:09:32.330021] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:52.668 [2024-04-26 14:09:32.330067] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:52.668 [2024-04-26 14:09:32.330112] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:52.668 [2024-04-26 14:09:32.331025] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:26:52.668 [2024-04-26 14:09:32.331060] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:52.668 [2024-04-26 14:09:32.331085] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:52.927 [2024-04-26 14:09:32.415974] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:52.927 [2024-04-26 14:09:32.416966] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:53.885 14:09:33 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:26:53.885 14:09:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:53.885 14:09:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:53.885 14:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.885 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:53.885 14:09:33 -- host/mdns_discovery.sh@68 -- # sort 00:26:53.885 14:09:33 -- host/mdns_discovery.sh@68 -- # xargs 00:26:53.885 14:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.885 14:09:33 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ 
\m\d\n\s\1\_\n\v\m\e\0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@64 -- # sort 00:26:53.886 14:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.886 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@64 -- # xargs 00:26:53.886 14:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:26:53.886 14:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.886 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # xargs 00:26:53.886 14:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:53.886 14:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # xargs 00:26:53.886 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:26:53.886 14:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:53.886 14:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.886 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:26:53.886 14:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:26:53.886 14:09:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.886 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:26:53.886 14:09:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.886 14:09:33 -- host/mdns_discovery.sh@172 -- # sleep 1 00:26:53.886 [2024-04-26 14:09:33.549949] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:26:55.264 14:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.264 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@80 -- # xargs 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@80 -- # sort 00:26:55.264 14:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:26:55.264 14:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.264 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@68 -- # sort 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@68 -- # xargs 00:26:55.264 14:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.264 14:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.264 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@64 -- # sort 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@64 -- # xargs 00:26:55.264 14:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:26:55.264 14:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.264 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:26:55.264 14:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:26:55.264 14:09:34 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:26:55.265 14:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.265 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.265 14:09:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:55.265 14:09:34 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:26:55.265 14:09:34 -- common/autotest_common.sh@638 -- # local es=0 00:26:55.265 14:09:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:26:55.265 14:09:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:55.265 14:09:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:55.265 14:09:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:55.265 14:09:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:55.265 14:09:34 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:26:55.265 14:09:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:55.265 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:26:55.265 [2024-04-26 14:09:34.733967] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:26:55.265 2024/04/26 14:09:34 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:26:55.265 request: 00:26:55.265 { 00:26:55.265 "method": "bdev_nvme_start_mdns_discovery", 00:26:55.265 "params": { 00:26:55.265 "name": "mdns", 00:26:55.265 "svcname": "_nvme-disc._http", 00:26:55.265 "hostnqn": "nqn.2021-12.io.spdk:test" 00:26:55.265 } 00:26:55.265 } 00:26:55.265 Got JSON-RPC error response 00:26:55.265 GoRPCClient: error on JSON-RPC call 00:26:55.265 14:09:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:55.265 14:09:34 -- common/autotest_common.sh@641 -- # es=1 00:26:55.265 14:09:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:55.265 14:09:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:55.265 14:09:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:55.265 14:09:34 -- host/mdns_discovery.sh@183 -- # sleep 5 00:26:55.521 [2024-04-26 14:09:35.118232] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:26:55.779 [2024-04-26 14:09:35.218068] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:26:55.779 [2024-04-26 14:09:35.317925] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:55.779 [2024-04-26 14:09:35.317983] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:26:55.779 TXT="p=tcp" 
"NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:55.779 cookie is 0 00:26:55.779 is_local: 1 00:26:55.779 our_own: 0 00:26:55.779 wide_area: 0 00:26:55.779 multicast: 1 00:26:55.779 cached: 1 00:26:55.779 [2024-04-26 14:09:35.417763] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:26:55.779 [2024-04-26 14:09:35.417848] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:26:55.779 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:26:55.779 cookie is 0 00:26:55.779 is_local: 1 00:26:55.779 our_own: 0 00:26:55.779 wide_area: 0 00:26:55.779 multicast: 1 00:26:55.779 cached: 1 00:26:56.724 [2024-04-26 14:09:36.330814] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:26:56.725 [2024-04-26 14:09:36.330861] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:26:56.725 [2024-04-26 14:09:36.330893] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:26:56.991 [2024-04-26 14:09:36.416820] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:26:56.992 [2024-04-26 14:09:36.431454] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:56.992 [2024-04-26 14:09:36.431508] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:56.992 [2024-04-26 14:09:36.431570] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:56.992 [2024-04-26 14:09:36.489994] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:26:56.992 [2024-04-26 14:09:36.490040] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:26:56.992 [2024-04-26 14:09:36.517475] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:26:56.992 [2024-04-26 14:09:36.586107] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:26:56.992 [2024-04-26 14:09:36.586166] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:27:00.280 14:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.280 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@80 -- # sort 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@80 -- # xargs 00:27:00.280 14:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # xargs 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:00.280 14:09:39 -- 
host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # sort 00:27:00.280 14:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.280 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:00.280 14:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.280 14:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.280 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@64 -- # xargs 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@64 -- # sort 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:00.280 14:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:00.280 14:09:39 -- common/autotest_common.sh@638 -- # local es=0 00:27:00.280 14:09:39 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:00.280 14:09:39 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:00.280 14:09:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:00.280 14:09:39 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:00.280 14:09:39 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:00.280 14:09:39 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:27:00.280 14:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.280 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:00.280 [2024-04-26 14:09:39.921306] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:27:00.280 2024/04/26 14:09:39 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:27:00.280 request: 00:27:00.280 { 00:27:00.280 "method": "bdev_nvme_start_mdns_discovery", 00:27:00.280 "params": { 00:27:00.280 "name": "cdc", 00:27:00.280 "svcname": "_nvme-disc._tcp", 00:27:00.280 "hostnqn": "nqn.2021-12.io.spdk:test" 00:27:00.280 } 00:27:00.280 } 00:27:00.280 Got JSON-RPC error response 00:27:00.280 GoRPCClient: error on JSON-RPC call 00:27:00.280 14:09:39 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:00.280 14:09:39 -- common/autotest_common.sh@641 -- # es=1 00:27:00.280 14:09:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:00.280 14:09:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:00.280 14:09:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@191 -- # 
get_discovery_ctrlrs 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # sort 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:27:00.280 14:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.280 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:00.280 14:09:39 -- host/mdns_discovery.sh@76 -- # xargs 00:27:00.540 14:09:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.540 14:09:39 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:27:00.540 14:09:39 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:27:00.540 14:09:39 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.540 14:09:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.540 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:00.540 14:09:39 -- host/mdns_discovery.sh@64 -- # xargs 00:27:00.540 14:09:39 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:27:00.540 14:09:39 -- host/mdns_discovery.sh@64 -- # sort 00:27:00.540 14:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.540 14:09:40 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:27:00.540 14:09:40 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:27:00.540 14:09:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.540 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:00.540 14:09:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.540 14:09:40 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:27:00.540 14:09:40 -- host/mdns_discovery.sh@197 -- # kill 88834 00:27:00.540 14:09:40 -- host/mdns_discovery.sh@200 -- # wait 88834 00:27:00.798 [2024-04-26 14:09:40.338299] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:27:01.749 14:09:41 -- host/mdns_discovery.sh@201 -- # kill 88914 00:27:01.749 Got SIGTERM, quitting. 00:27:01.749 14:09:41 -- host/mdns_discovery.sh@202 -- # kill 88863 00:27:01.749 Got SIGTERM, quitting. 00:27:01.749 14:09:41 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:27:01.749 14:09:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:01.749 14:09:41 -- nvmf/common.sh@117 -- # sync 00:27:01.749 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:27:01.749 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:27:01.749 avahi-daemon 0.8 exiting. 
00:27:01.749 14:09:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.749 14:09:41 -- nvmf/common.sh@120 -- # set +e 00:27:01.749 14:09:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.749 14:09:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.749 rmmod nvme_tcp 00:27:01.749 rmmod nvme_fabrics 00:27:01.749 rmmod nvme_keyring 00:27:01.749 14:09:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.749 14:09:41 -- nvmf/common.sh@124 -- # set -e 00:27:01.749 14:09:41 -- nvmf/common.sh@125 -- # return 0 00:27:01.749 14:09:41 -- nvmf/common.sh@478 -- # '[' -n 88781 ']' 00:27:01.749 14:09:41 -- nvmf/common.sh@479 -- # killprocess 88781 00:27:01.749 14:09:41 -- common/autotest_common.sh@936 -- # '[' -z 88781 ']' 00:27:01.749 14:09:41 -- common/autotest_common.sh@940 -- # kill -0 88781 00:27:01.749 14:09:41 -- common/autotest_common.sh@941 -- # uname 00:27:01.749 14:09:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:01.749 14:09:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88781 00:27:01.749 14:09:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:01.749 14:09:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:01.749 killing process with pid 88781 00:27:01.749 14:09:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88781' 00:27:01.749 14:09:41 -- common/autotest_common.sh@955 -- # kill 88781 00:27:01.749 14:09:41 -- common/autotest_common.sh@960 -- # wait 88781 00:27:03.166 14:09:42 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:03.166 14:09:42 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:03.166 14:09:42 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:03.166 14:09:42 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.166 14:09:42 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.166 14:09:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.166 14:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.166 14:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.166 14:09:42 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:03.166 00:27:03.166 real 0m22.875s 00:27:03.166 user 0m42.312s 00:27:03.166 sys 0m2.796s 00:27:03.166 14:09:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:03.166 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:03.166 ************************************ 00:27:03.166 END TEST nvmf_mdns_discovery 00:27:03.166 ************************************ 00:27:03.166 14:09:42 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:27:03.166 14:09:42 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:03.166 14:09:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:03.166 14:09:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.166 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:03.166 ************************************ 00:27:03.166 START TEST nvmf_multipath 00:27:03.166 ************************************ 00:27:03.166 14:09:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:27:03.425 * Looking for test storage... 
00:27:03.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:03.425 14:09:42 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:03.425 14:09:42 -- nvmf/common.sh@7 -- # uname -s 00:27:03.425 14:09:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.425 14:09:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.425 14:09:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.425 14:09:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.425 14:09:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.425 14:09:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.425 14:09:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.425 14:09:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.425 14:09:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.425 14:09:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.425 14:09:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:27:03.425 14:09:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:27:03.425 14:09:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.425 14:09:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.425 14:09:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:03.425 14:09:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.425 14:09:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:03.425 14:09:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.425 14:09:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.425 14:09:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.425 14:09:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.425 14:09:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.425 14:09:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.425 14:09:42 -- paths/export.sh@5 -- # export PATH 00:27:03.425 14:09:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.425 14:09:42 -- nvmf/common.sh@47 -- # : 0 00:27:03.425 14:09:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:03.425 14:09:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:03.425 14:09:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.425 14:09:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.425 14:09:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.425 14:09:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:03.425 14:09:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:03.425 14:09:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:03.425 14:09:42 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:03.425 14:09:42 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:03.425 14:09:42 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:03.425 14:09:42 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:03.425 14:09:42 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:03.425 14:09:42 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:03.425 14:09:42 -- host/multipath.sh@30 -- # nvmftestinit 00:27:03.425 14:09:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:03.425 14:09:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.425 14:09:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:03.425 14:09:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:03.425 14:09:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:03.425 14:09:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.425 14:09:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:03.425 14:09:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.425 14:09:42 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:03.425 14:09:42 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:03.425 14:09:42 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:03.425 14:09:42 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:03.425 14:09:42 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:03.425 14:09:42 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:03.425 14:09:42 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:03.425 14:09:42 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:03.425 14:09:42 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:03.425 14:09:42 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:03.425 14:09:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:03.425 14:09:42 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:03.425 14:09:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:03.425 14:09:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:03.425 14:09:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:03.425 14:09:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:03.425 14:09:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:03.425 14:09:42 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:03.425 14:09:42 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:03.426 14:09:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:03.426 Cannot find device "nvmf_tgt_br" 00:27:03.426 14:09:43 -- nvmf/common.sh@155 -- # true 00:27:03.426 14:09:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:03.426 Cannot find device "nvmf_tgt_br2" 00:27:03.426 14:09:43 -- nvmf/common.sh@156 -- # true 00:27:03.426 14:09:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:03.426 14:09:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:03.426 Cannot find device "nvmf_tgt_br" 00:27:03.426 14:09:43 -- nvmf/common.sh@158 -- # true 00:27:03.426 14:09:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:03.426 Cannot find device "nvmf_tgt_br2" 00:27:03.426 14:09:43 -- nvmf/common.sh@159 -- # true 00:27:03.426 14:09:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:03.685 14:09:43 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:03.685 14:09:43 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:03.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:03.685 14:09:43 -- nvmf/common.sh@162 -- # true 00:27:03.686 14:09:43 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:03.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:03.686 14:09:43 -- nvmf/common.sh@163 -- # true 00:27:03.686 14:09:43 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:03.686 14:09:43 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:03.686 14:09:43 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:03.686 14:09:43 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:03.686 14:09:43 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:03.686 14:09:43 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:03.686 14:09:43 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:03.686 14:09:43 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:03.686 14:09:43 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:03.686 14:09:43 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:03.686 14:09:43 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:03.686 14:09:43 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:27:03.686 14:09:43 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:03.686 14:09:43 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:03.686 14:09:43 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:03.686 14:09:43 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:03.686 14:09:43 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:03.686 14:09:43 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:03.686 14:09:43 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:03.686 14:09:43 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:03.945 14:09:43 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:03.945 14:09:43 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:03.945 14:09:43 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:03.945 14:09:43 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:03.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:03.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:27:03.945 00:27:03.945 --- 10.0.0.2 ping statistics --- 00:27:03.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.945 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:27:03.945 14:09:43 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:03.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:03.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:27:03.945 00:27:03.945 --- 10.0.0.3 ping statistics --- 00:27:03.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.945 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:03.945 14:09:43 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:03.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:27:03.945 00:27:03.945 --- 10.0.0.1 ping statistics --- 00:27:03.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.945 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:27:03.945 14:09:43 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.945 14:09:43 -- nvmf/common.sh@422 -- # return 0 00:27:03.945 14:09:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:03.945 14:09:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.945 14:09:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:03.945 14:09:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:03.945 14:09:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.945 14:09:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:03.945 14:09:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:03.945 14:09:43 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:27:03.945 14:09:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:03.945 14:09:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:03.945 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:27:03.945 14:09:43 -- nvmf/common.sh@470 -- # nvmfpid=89457 00:27:03.945 14:09:43 -- nvmf/common.sh@471 -- # waitforlisten 89457 00:27:03.945 14:09:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:03.945 14:09:43 -- common/autotest_common.sh@817 -- # '[' -z 89457 ']' 00:27:03.945 14:09:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.945 14:09:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:03.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.945 14:09:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.945 14:09:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:03.945 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:27:03.945 [2024-04-26 14:09:43.537370] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:27:03.945 [2024-04-26 14:09:43.537491] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.204 [2024-04-26 14:09:43.697799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:04.463 [2024-04-26 14:09:43.968255] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:04.463 [2024-04-26 14:09:43.968316] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:04.463 [2024-04-26 14:09:43.968332] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:04.463 [2024-04-26 14:09:43.968355] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:04.463 [2024-04-26 14:09:43.968368] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
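The target is now running inside the nvmf_tgt_ns_spdk namespace; the trace that follows creates the TCP transport, a Malloc-backed subsystem with listeners on ports 4420 and 4421, and then starts bdevperf on the initiator side. The multipath setup it performs boils down to attaching the same controller name over both listeners, with -x multipath on the second attach so it is added as an extra path rather than a new controller. A minimal sketch, using the socket path, addresses, NQN, and controller name taken from the trace below:

# first path: creates controller Nvme0 and namespace bdev Nvme0n1
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
# second path: same bdev name and subsystem, port 4421, added as an additional path via -x multipath
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10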
00:27:04.463 [2024-04-26 14:09:43.968593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.463 [2024-04-26 14:09:43.968630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:04.721 14:09:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:04.721 14:09:44 -- common/autotest_common.sh@850 -- # return 0 00:27:04.721 14:09:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:04.721 14:09:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:04.721 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:04.979 14:09:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.979 14:09:44 -- host/multipath.sh@33 -- # nvmfapp_pid=89457 00:27:04.979 14:09:44 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:04.979 [2024-04-26 14:09:44.618213] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.979 14:09:44 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:05.237 Malloc0 00:27:05.496 14:09:44 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:05.496 14:09:45 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:05.755 14:09:45 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.014 [2024-04-26 14:09:45.478249] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.014 14:09:45 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:06.014 [2024-04-26 14:09:45.658236] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:06.014 14:09:45 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:06.014 14:09:45 -- host/multipath.sh@44 -- # bdevperf_pid=89556 00:27:06.014 14:09:45 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:06.014 14:09:45 -- host/multipath.sh@47 -- # waitforlisten 89556 /var/tmp/bdevperf.sock 00:27:06.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:06.014 14:09:45 -- common/autotest_common.sh@817 -- # '[' -z 89556 ']' 00:27:06.014 14:09:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.014 14:09:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:06.014 14:09:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:06.014 14:09:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:06.014 14:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:06.951 14:09:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:06.951 14:09:46 -- common/autotest_common.sh@850 -- # return 0 00:27:06.951 14:09:46 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:07.211 14:09:46 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:27:07.470 Nvme0n1 00:27:07.470 14:09:47 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:07.729 Nvme0n1 00:27:07.987 14:09:47 -- host/multipath.sh@78 -- # sleep 1 00:27:07.987 14:09:47 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:08.919 14:09:48 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:27:08.919 14:09:48 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:09.177 14:09:48 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:09.435 14:09:48 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:27:09.435 14:09:48 -- host/multipath.sh@65 -- # dtrace_pid=89642 00:27:09.435 14:09:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:09.435 14:09:48 -- host/multipath.sh@66 -- # sleep 6 00:27:16.021 14:09:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:16.021 14:09:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:16.021 14:09:55 -- host/multipath.sh@67 -- # active_port=4421 00:27:16.021 14:09:55 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:16.021 Attaching 4 probes... 
00:27:16.021 @path[10.0.0.2, 4421]: 15990 00:27:16.021 @path[10.0.0.2, 4421]: 19336 00:27:16.021 @path[10.0.0.2, 4421]: 17517 00:27:16.021 @path[10.0.0.2, 4421]: 16904 00:27:16.021 @path[10.0.0.2, 4421]: 15499 00:27:16.021 14:09:55 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:16.021 14:09:55 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:16.021 14:09:55 -- host/multipath.sh@69 -- # sed -n 1p 00:27:16.021 14:09:55 -- host/multipath.sh@69 -- # port=4421 00:27:16.021 14:09:55 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:16.021 14:09:55 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:16.021 14:09:55 -- host/multipath.sh@72 -- # kill 89642 00:27:16.021 14:09:55 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:16.021 14:09:55 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:27:16.021 14:09:55 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.021 14:09:55 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:16.021 14:09:55 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:27:16.021 14:09:55 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:16.021 14:09:55 -- host/multipath.sh@65 -- # dtrace_pid=89774 00:27:16.021 14:09:55 -- host/multipath.sh@66 -- # sleep 6 00:27:22.664 14:10:01 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:22.664 14:10:01 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:22.664 14:10:01 -- host/multipath.sh@67 -- # active_port=4420 00:27:22.664 14:10:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:22.664 Attaching 4 probes... 
00:27:22.664 @path[10.0.0.2, 4420]: 17470 00:27:22.664 @path[10.0.0.2, 4420]: 18965 00:27:22.664 @path[10.0.0.2, 4420]: 18642 00:27:22.664 @path[10.0.0.2, 4420]: 18828 00:27:22.664 @path[10.0.0.2, 4420]: 18621 00:27:22.664 14:10:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:22.664 14:10:01 -- host/multipath.sh@69 -- # sed -n 1p 00:27:22.664 14:10:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:22.664 14:10:01 -- host/multipath.sh@69 -- # port=4420 00:27:22.664 14:10:01 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:22.664 14:10:01 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:22.664 14:10:01 -- host/multipath.sh@72 -- # kill 89774 00:27:22.664 14:10:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:22.664 14:10:01 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:27:22.664 14:10:01 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:22.664 14:10:02 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:22.960 14:10:02 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:27:22.960 14:10:02 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:22.960 14:10:02 -- host/multipath.sh@65 -- # dtrace_pid=89904 00:27:22.960 14:10:02 -- host/multipath.sh@66 -- # sleep 6 00:27:29.537 14:10:08 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:29.537 14:10:08 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:29.537 14:10:08 -- host/multipath.sh@67 -- # active_port=4421 00:27:29.537 14:10:08 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:29.537 Attaching 4 probes... 
00:27:29.537 @path[10.0.0.2, 4421]: 14189 00:27:29.537 @path[10.0.0.2, 4421]: 18619 00:27:29.537 @path[10.0.0.2, 4421]: 18629 00:27:29.537 @path[10.0.0.2, 4421]: 18141 00:27:29.537 @path[10.0.0.2, 4421]: 18588 00:27:29.538 14:10:08 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:29.538 14:10:08 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:29.538 14:10:08 -- host/multipath.sh@69 -- # sed -n 1p 00:27:29.538 14:10:08 -- host/multipath.sh@69 -- # port=4421 00:27:29.538 14:10:08 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:29.538 14:10:08 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:29.538 14:10:08 -- host/multipath.sh@72 -- # kill 89904 00:27:29.538 14:10:08 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:29.538 14:10:08 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:27:29.538 14:10:08 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:29.538 14:10:08 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:29.538 14:10:09 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:27:29.538 14:10:09 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:29.538 14:10:09 -- host/multipath.sh@65 -- # dtrace_pid=90039 00:27:29.538 14:10:09 -- host/multipath.sh@66 -- # sleep 6 00:27:36.100 14:10:15 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:36.100 14:10:15 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:27:36.100 14:10:15 -- host/multipath.sh@67 -- # active_port= 00:27:36.100 14:10:15 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:36.100 Attaching 4 probes... 
00:27:36.100 00:27:36.100 00:27:36.100 00:27:36.100 00:27:36.100 00:27:36.100 14:10:15 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:36.100 14:10:15 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:36.100 14:10:15 -- host/multipath.sh@69 -- # sed -n 1p 00:27:36.100 14:10:15 -- host/multipath.sh@69 -- # port= 00:27:36.100 14:10:15 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:27:36.100 14:10:15 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:27:36.100 14:10:15 -- host/multipath.sh@72 -- # kill 90039 00:27:36.100 14:10:15 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:36.100 14:10:15 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:27:36.100 14:10:15 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:36.100 14:10:15 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:36.100 14:10:15 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:27:36.100 14:10:15 -- host/multipath.sh@65 -- # dtrace_pid=90165 00:27:36.100 14:10:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:36.100 14:10:15 -- host/multipath.sh@66 -- # sleep 6 00:27:42.685 14:10:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:42.685 14:10:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:27:42.685 14:10:21 -- host/multipath.sh@67 -- # active_port=4421 00:27:42.685 14:10:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:42.685 Attaching 4 probes... 
00:27:42.685 @path[10.0.0.2, 4421]: 18511 00:27:42.685 @path[10.0.0.2, 4421]: 18783 00:27:42.685 @path[10.0.0.2, 4421]: 17610 00:27:42.685 @path[10.0.0.2, 4421]: 16667 00:27:42.685 @path[10.0.0.2, 4421]: 15620 00:27:42.685 14:10:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:42.685 14:10:21 -- host/multipath.sh@69 -- # sed -n 1p 00:27:42.685 14:10:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:42.685 14:10:21 -- host/multipath.sh@69 -- # port=4421 00:27:42.685 14:10:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:27:42.685 14:10:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:27:42.685 14:10:21 -- host/multipath.sh@72 -- # kill 90165 00:27:42.685 14:10:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:42.685 14:10:21 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:42.685 [2024-04-26 14:10:22.139065] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:27:42.686 [2024-04-26 14:10:22.140188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(5) to be set 00:27:42.686 14:10:22 -- host/multipath.sh@101 -- # sleep 1 00:27:43.624 
14:10:23 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:27:43.624 14:10:23 -- host/multipath.sh@65 -- # dtrace_pid=90296 00:27:43.624 14:10:23 -- host/multipath.sh@66 -- # sleep 6 00:27:43.624 14:10:23 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:50.190 
14:10:29 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:27:50.190 14:10:29 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:27:50.190 14:10:29 -- host/multipath.sh@67 -- # active_port=4420 00:27:50.190 14:10:29 -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:50.190 Attaching 4 probes... 00:27:50.190 @path[10.0.0.2, 4420]: 16936 00:27:50.190 @path[10.0.0.2, 4420]: 17307 00:27:50.190 @path[10.0.0.2, 4420]: 17168 00:27:50.190 @path[10.0.0.2, 4420]: 17929 00:27:50.190 @path[10.0.0.2, 4420]: 18222 00:27:50.190 14:10:29 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:27:50.191 14:10:29 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:27:50.191 14:10:29 -- host/multipath.sh@69 -- # sed -n 1p 00:27:50.191 14:10:29 -- host/multipath.sh@69 -- # port=4420 00:27:50.191 14:10:29 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:27:50.191 14:10:29 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:27:50.191 14:10:29 -- host/multipath.sh@72 -- # kill 90296 00:27:50.191 14:10:29 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:50.191 14:10:29 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:50.191 [2024-04-26 14:10:29.597406] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:50.191 14:10:29 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:50.191 14:10:29 -- host/multipath.sh@111 -- # sleep 6 00:27:56.754 14:10:35 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:27:56.754 14:10:35 -- host/multipath.sh@65 -- # dtrace_pid=90494 00:27:56.754 14:10:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89457 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:27:56.754 14:10:35 -- host/multipath.sh@66 -- # sleep 6 00:28:03.320 14:10:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:28:03.320 14:10:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:28:03.320 14:10:42 -- host/multipath.sh@67 -- # active_port=4421 00:28:03.320 14:10:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.320 Attaching 4 probes... 
00:28:03.320 @path[10.0.0.2, 4421]: 17619 00:28:03.320 @path[10.0.0.2, 4421]: 17783 00:28:03.320 @path[10.0.0.2, 4421]: 18033 00:28:03.320 @path[10.0.0.2, 4421]: 18609 00:28:03.320 @path[10.0.0.2, 4421]: 17041 00:28:03.320 14:10:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:28:03.320 14:10:42 -- host/multipath.sh@69 -- # sed -n 1p 00:28:03.320 14:10:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:28:03.320 14:10:42 -- host/multipath.sh@69 -- # port=4421 00:28:03.320 14:10:42 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:28:03.320 14:10:42 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:28:03.320 14:10:42 -- host/multipath.sh@72 -- # kill 90494 00:28:03.320 14:10:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:03.320 14:10:42 -- host/multipath.sh@114 -- # killprocess 89556 00:28:03.320 14:10:42 -- common/autotest_common.sh@936 -- # '[' -z 89556 ']' 00:28:03.320 14:10:42 -- common/autotest_common.sh@940 -- # kill -0 89556 00:28:03.320 14:10:42 -- common/autotest_common.sh@941 -- # uname 00:28:03.320 14:10:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:03.320 14:10:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89556 00:28:03.320 14:10:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:03.320 14:10:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:03.320 killing process with pid 89556 00:28:03.320 14:10:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89556' 00:28:03.320 14:10:42 -- common/autotest_common.sh@955 -- # kill 89556 00:28:03.320 14:10:42 -- common/autotest_common.sh@960 -- # wait 89556 00:28:03.320 Connection closed with partial response: 00:28:03.320 00:28:03.320 00:28:03.903 14:10:43 -- host/multipath.sh@116 -- # wait 89556 00:28:03.903 14:10:43 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:03.904 [2024-04-26 14:09:45.753658] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:03.904 [2024-04-26 14:09:45.753886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89556 ] 00:28:03.904 [2024-04-26 14:09:45.921687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.904 [2024-04-26 14:09:46.163076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.904 Running I/O for 90 seconds... 
00:28:03.904 [2024-04-26 14:09:55.651944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.652439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.652457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.904 [2024-04-26 14:09:55.653614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.653953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.653970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.654969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.654995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655194] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 
sqhd:0040 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.655787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.655804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.656515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.656546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.656577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.656594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.656618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.656634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.904 [2024-04-26 14:09:55.656658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.904 [2024-04-26 14:09:55.656674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656715] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.656963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.656987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 
14:09:55.657120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33128 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:33168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:09:55.657771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.657827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.657877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.657917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657941] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.657957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.657997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658399] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:33416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:33424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.658924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 
dnr:0 00:28:03.905 [2024-04-26 14:09:55.658967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:33488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.658985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:09:55.659011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:09:55.659029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.905 [2024-04-26 14:10:02.126084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126876] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.905 [2024-04-26 14:10:02.126916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.905 [2024-04-26 14:10:02.126940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.126955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.126979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:110896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.126994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:110936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110952 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:110984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.127511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.127527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:111008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:24 nsid:1 lba:111032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:111048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:111056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.128964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.128980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:111072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:111096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:111104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:111112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:111136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:111144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:111152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:111160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:111176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:111184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 
sqhd:005b p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:111200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.129956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.129984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:111240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:111256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:111264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:111272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:111280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:111296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.130955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.130986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.131033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.131082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:111320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.131142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.131207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.131255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:111344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:03.906 [2024-04-26 14:10:02.131303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:111352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.906 [2024-04-26 14:10:02.131320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:02.131351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:111360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:02.131367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:02.131398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:02.131415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:02.131445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:111376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:02.131462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:02.131492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:02.131509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:02.131539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:02.131556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:02.131587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:111400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:02.131604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.042646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.042720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.042788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.042808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.042834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.042872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.042898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.042915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.042940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.042957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.042982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.042999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.043023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.043039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.043062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.043078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.044452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.044506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.044551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.044594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.044638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.044682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.044978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.044995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045064] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 
p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.045930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.045947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.907 [2024-04-26 14:10:09.046247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046517] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.907 [2024-04-26 14:10:09.046778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:03.907 [2024-04-26 14:10:09.046805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.046822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.046847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.046864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.046889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.046907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.046933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23688 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.046950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.046976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.046993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.908 [2024-04-26 14:10:09.047314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.908 [2024-04-26 14:10:09.047357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:03.908 [2024-04-26 14:10:09.047385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:03.908 [2024-04-26 14:10:09.047402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0
[... repetitive nvme_qpair.c *NOTICE* output condensed: further WRITE (lba:24280-24336) and READ (lba:23760-24136) commands on sqid:1 are printed one by one, each followed by an ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, timestamps 2024-04-26 14:10:09.047429 through 14:10:09.050139 ...]
[... repetitive nvme_qpair.c *NOTICE* output condensed: a second burst of queued READ (lba:50304-50464) and WRITE (lba:50504-51320) commands on sqid:1 is printed at 14:10:22, each followed by an ABORTED - SQ DELETION (00/08) completion, timestamps 2024-04-26 14:10:22.141259 through 14:10:22.146614 ...]
sqhd:0000 p:0 m:0 dnr:0 00:28:03.910 [2024-04-26 14:10:22.146633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.911 [2024-04-26 14:10:22.146649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.911 [2024-04-26 14:10:22.146668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.911 [2024-04-26 14:10:22.146684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.911 [2024-04-26 14:10:22.146702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.911 [2024-04-26 14:10:22.146718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.911 [2024-04-26 14:10:22.146761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:03.911 [2024-04-26 14:10:22.146776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:03.911 [2024-04-26 14:10:22.146791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50496 len:8 PRP1 0x0 PRP2 0x0 00:28:03.911 [2024-04-26 14:10:22.146809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.911 [2024-04-26 14:10:22.147061] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007a40 was disconnected and freed. reset controller. 00:28:03.911 [2024-04-26 14:10:22.148600] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.911 [2024-04-26 14:10:22.148713] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006a40 (9): Bad file descriptor 00:28:03.911 [2024-04-26 14:10:22.148855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.911 [2024-04-26 14:10:22.148916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.911 [2024-04-26 14:10:22.148938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000006a40 with addr=10.0.0.2, port=4421 00:28:03.911 [2024-04-26 14:10:22.148963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006a40 is same with the state(5) to be set 00:28:03.911 [2024-04-26 14:10:22.148992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006a40 (9): Bad file descriptor 00:28:03.911 [2024-04-26 14:10:22.149017] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.911 [2024-04-26 14:10:22.149034] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.911 [2024-04-26 14:10:22.149052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.911 [2024-04-26 14:10:22.149084] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.911 [2024-04-26 14:10:22.149100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.911 [2024-04-26 14:10:32.224882] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:03.911 Received shutdown signal, test time was about 54.741818 seconds 00:28:03.911 00:28:03.911 Latency(us) 00:28:03.911 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.911 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:03.911 Verification LBA range: start 0x0 length 0x4000 00:28:03.911 Nvme0n1 : 54.74 7513.85 29.35 0.00 0.00 17015.88 320.77 7061253.96 00:28:03.911 =================================================================================================================== 00:28:03.911 Total : 7513.85 29.35 0.00 0.00 17015.88 320.77 7061253.96 00:28:03.911 14:10:43 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:04.176 14:10:43 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:28:04.176 14:10:43 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:04.176 14:10:43 -- host/multipath.sh@125 -- # nvmftestfini 00:28:04.176 14:10:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:04.176 14:10:43 -- nvmf/common.sh@117 -- # sync 00:28:04.176 14:10:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:04.176 14:10:43 -- nvmf/common.sh@120 -- # set +e 00:28:04.176 14:10:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:04.176 14:10:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:04.176 rmmod nvme_tcp 00:28:04.176 rmmod nvme_fabrics 00:28:04.176 rmmod nvme_keyring 00:28:04.176 14:10:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:04.176 14:10:43 -- nvmf/common.sh@124 -- # set -e 00:28:04.176 14:10:43 -- nvmf/common.sh@125 -- # return 0 00:28:04.176 14:10:43 -- nvmf/common.sh@478 -- # '[' -n 89457 ']' 00:28:04.176 14:10:43 -- nvmf/common.sh@479 -- # killprocess 89457 00:28:04.176 14:10:43 -- common/autotest_common.sh@936 -- # '[' -z 89457 ']' 00:28:04.176 14:10:43 -- common/autotest_common.sh@940 -- # kill -0 89457 00:28:04.176 14:10:43 -- common/autotest_common.sh@941 -- # uname 00:28:04.176 14:10:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:04.176 14:10:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89457 00:28:04.176 14:10:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:04.176 14:10:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:04.176 killing process with pid 89457 00:28:04.176 14:10:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89457' 00:28:04.176 14:10:43 -- common/autotest_common.sh@955 -- # kill 89457 00:28:04.176 14:10:43 -- common/autotest_common.sh@960 -- # wait 89457 00:28:05.551 14:10:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:05.551 14:10:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:05.551 14:10:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:05.551 14:10:45 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.551 14:10:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.551 14:10:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.551 14:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.551 14:10:45 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:28:05.810 14:10:45 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:05.810 00:28:05.810 real 1m2.438s 00:28:05.810 user 2m52.789s 00:28:05.810 sys 0m15.507s 00:28:05.810 14:10:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:05.810 14:10:45 -- common/autotest_common.sh@10 -- # set +x 00:28:05.810 ************************************ 00:28:05.810 END TEST nvmf_multipath 00:28:05.810 ************************************ 00:28:05.810 14:10:45 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:05.810 14:10:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:05.810 14:10:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:05.810 14:10:45 -- common/autotest_common.sh@10 -- # set +x 00:28:05.810 ************************************ 00:28:05.810 START TEST nvmf_timeout 00:28:05.810 ************************************ 00:28:05.810 14:10:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:28:06.070 * Looking for test storage... 00:28:06.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:06.070 14:10:45 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:06.070 14:10:45 -- nvmf/common.sh@7 -- # uname -s 00:28:06.070 14:10:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.070 14:10:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.070 14:10:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.070 14:10:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.070 14:10:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.070 14:10:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.070 14:10:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.070 14:10:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.070 14:10:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.070 14:10:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.070 14:10:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:28:06.070 14:10:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:28:06.070 14:10:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.070 14:10:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.070 14:10:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:06.070 14:10:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.070 14:10:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:06.070 14:10:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.070 14:10:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.070 14:10:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.070 14:10:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.070 14:10:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.070 14:10:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.070 14:10:45 -- paths/export.sh@5 -- # export PATH 00:28:06.070 14:10:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.070 14:10:45 -- nvmf/common.sh@47 -- # : 0 00:28:06.070 14:10:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.070 14:10:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.070 14:10:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.070 14:10:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.070 14:10:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.070 14:10:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.070 14:10:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.070 14:10:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.070 14:10:45 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.070 14:10:45 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:06.070 14:10:45 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:06.070 14:10:45 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:28:06.070 14:10:45 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:06.070 14:10:45 -- host/timeout.sh@19 -- # nvmftestinit 00:28:06.070 14:10:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:06.070 14:10:45 -- nvmf/common.sh@435 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:28:06.070 14:10:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:06.070 14:10:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:06.070 14:10:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:06.070 14:10:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.070 14:10:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.070 14:10:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.070 14:10:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:06.070 14:10:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:06.070 14:10:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:06.070 14:10:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:06.070 14:10:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:06.070 14:10:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:06.070 14:10:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:06.070 14:10:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:06.070 14:10:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:06.070 14:10:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:06.070 14:10:45 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:06.070 14:10:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:06.070 14:10:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:06.070 14:10:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:06.070 14:10:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:06.070 14:10:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:06.070 14:10:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:06.070 14:10:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:06.070 14:10:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:06.070 14:10:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:06.070 Cannot find device "nvmf_tgt_br" 00:28:06.071 14:10:45 -- nvmf/common.sh@155 -- # true 00:28:06.071 14:10:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:06.071 Cannot find device "nvmf_tgt_br2" 00:28:06.071 14:10:45 -- nvmf/common.sh@156 -- # true 00:28:06.071 14:10:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:06.071 14:10:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:06.071 Cannot find device "nvmf_tgt_br" 00:28:06.071 14:10:45 -- nvmf/common.sh@158 -- # true 00:28:06.071 14:10:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:06.071 Cannot find device "nvmf_tgt_br2" 00:28:06.071 14:10:45 -- nvmf/common.sh@159 -- # true 00:28:06.071 14:10:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:06.352 14:10:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:06.352 14:10:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:06.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:06.352 14:10:45 -- nvmf/common.sh@162 -- # true 00:28:06.352 14:10:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:06.352 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:06.352 14:10:45 -- nvmf/common.sh@163 -- # true 00:28:06.352 14:10:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:06.352 14:10:45 -- nvmf/common.sh@169 
-- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:06.352 14:10:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:06.352 14:10:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:06.352 14:10:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:06.352 14:10:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:06.352 14:10:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:06.352 14:10:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:06.352 14:10:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:06.352 14:10:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:06.352 14:10:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:06.352 14:10:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:06.352 14:10:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:06.352 14:10:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:06.352 14:10:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:06.352 14:10:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:06.352 14:10:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:06.352 14:10:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:06.352 14:10:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:06.352 14:10:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:06.352 14:10:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:06.352 14:10:46 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:06.352 14:10:46 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:06.616 14:10:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:06.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:06.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:28:06.617 00:28:06.617 --- 10.0.0.2 ping statistics --- 00:28:06.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.617 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:28:06.617 14:10:46 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:06.617 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:06.617 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:28:06.617 00:28:06.617 --- 10.0.0.3 ping statistics --- 00:28:06.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.617 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:28:06.617 14:10:46 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:06.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:06.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:28:06.617 00:28:06.617 --- 10.0.0.1 ping statistics --- 00:28:06.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:06.617 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:28:06.617 14:10:46 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:06.617 14:10:46 -- nvmf/common.sh@422 -- # return 0 00:28:06.617 14:10:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:06.617 14:10:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:06.617 14:10:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:06.617 14:10:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:06.617 14:10:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:06.617 14:10:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:06.617 14:10:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:06.617 14:10:46 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:28:06.617 14:10:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:06.617 14:10:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:06.617 14:10:46 -- common/autotest_common.sh@10 -- # set +x 00:28:06.617 14:10:46 -- nvmf/common.sh@470 -- # nvmfpid=90853 00:28:06.617 14:10:46 -- nvmf/common.sh@471 -- # waitforlisten 90853 00:28:06.617 14:10:46 -- common/autotest_common.sh@817 -- # '[' -z 90853 ']' 00:28:06.617 14:10:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.617 14:10:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:06.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.617 14:10:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.617 14:10:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:06.617 14:10:46 -- common/autotest_common.sh@10 -- # set +x 00:28:06.617 14:10:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:28:06.617 [2024-04-26 14:10:46.176227] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:28:06.617 [2024-04-26 14:10:46.176342] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.876 [2024-04-26 14:10:46.350944] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:07.135 [2024-04-26 14:10:46.594738] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.135 [2024-04-26 14:10:46.594799] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.135 [2024-04-26 14:10:46.594817] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.135 [2024-04-26 14:10:46.594841] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.135 [2024-04-26 14:10:46.594855] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
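The veth/namespace plumbing that nvmf_veth_init builds in the lines above reduces to the following shell sequence (a condensed sketch using the same interface names and 10.0.0.x addresses as this job; teardown of stale devices and error handling are omitted):

ip netns add nvmf_tgt_ns_spdk
# one host-side pair for the initiator, two pairs whose far ends move into the target namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# the initiator gets 10.0.0.1, the namespaced target interfaces get 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# bridge the host-side ends together and open TCP/4420 for the NVMe-oF traffic
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3   # the reachability checks whose output appears above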
00:28:07.135 [2024-04-26 14:10:46.595030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.135 [2024-04-26 14:10:46.595065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.703 14:10:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:07.703 14:10:47 -- common/autotest_common.sh@850 -- # return 0 00:28:07.703 14:10:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:07.703 14:10:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:07.703 14:10:47 -- common/autotest_common.sh@10 -- # set +x 00:28:07.703 14:10:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.703 14:10:47 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:07.703 14:10:47 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:07.703 [2024-04-26 14:10:47.363938] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.963 14:10:47 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:08.223 Malloc0 00:28:08.223 14:10:47 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:08.482 14:10:47 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:08.482 14:10:48 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.741 [2024-04-26 14:10:48.308619] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.741 14:10:48 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:28:08.741 14:10:48 -- host/timeout.sh@32 -- # bdevperf_pid=90943 00:28:08.741 14:10:48 -- host/timeout.sh@34 -- # waitforlisten 90943 /var/tmp/bdevperf.sock 00:28:08.741 14:10:48 -- common/autotest_common.sh@817 -- # '[' -z 90943 ']' 00:28:08.741 14:10:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.741 14:10:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:08.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:08.741 14:10:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.741 14:10:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:08.741 14:10:48 -- common/autotest_common.sh@10 -- # set +x 00:28:08.741 [2024-04-26 14:10:48.414725] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
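Spelled out, the target-side configuration issued by host/timeout.sh lines 25-29 above is the usual NVMe-oF TCP RPC sequence (a sketch; rpc.py here talks to the nvmf_tgt started above over its default /var/tmp/spdk.sock, and the listener address is the namespaced veth address from the network setup):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192              # TCP transport, 8192-byte in-capsule data size
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB ramdisk with 512-byte blocks to back the namespace
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that subsystem from the host side as the initiator, as the lines that follow show.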
00:28:08.741 [2024-04-26 14:10:48.414844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90943 ] 00:28:09.001 [2024-04-26 14:10:48.585954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.260 [2024-04-26 14:10:48.822111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.826 14:10:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:09.826 14:10:49 -- common/autotest_common.sh@850 -- # return 0 00:28:09.826 14:10:49 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:28:09.826 14:10:49 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:28:10.083 NVMe0n1 00:28:10.083 14:10:49 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:10.083 14:10:49 -- host/timeout.sh@51 -- # rpc_pid=90986 00:28:10.083 14:10:49 -- host/timeout.sh@53 -- # sleep 1 00:28:10.341 Running I/O for 10 seconds... 00:28:11.281 14:10:50 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.281 [2024-04-26 14:10:50.936546] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936609] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936623] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936634] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936646] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936658] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936690] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936701] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936712] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936722] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936733] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936754] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936765] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936776] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936787] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936808] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936820] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936862] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.936874] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:28:11.281 [2024-04-26 14:10:50.937711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.281 [2024-04-26 14:10:50.937982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.281 [2024-04-26 14:10:50.937996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:108 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82104 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.282 [2024-04-26 14:10:50.938455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.282 [2024-04-26 14:10:50.938480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.282 [2024-04-26 14:10:50.938506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.282 [2024-04-26 14:10:50.938531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.282 [2024-04-26 14:10:50.938556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.282 [2024-04-26 14:10:50.938581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 
[2024-04-26 14:10:50.938657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938939] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.938978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.938990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.939017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.939029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.939043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.282 [2024-04-26 14:10:50.939056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.282 [2024-04-26 14:10:50.939071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:11.283 [2024-04-26 14:10:50.939515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:11.283 [2024-04-26 14:10:50.939765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.939980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.939992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.940007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.940020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 
14:10:50.940033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.940045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.940059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.940070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.940084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.940096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.940110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.283 [2024-04-26 14:10:50.940121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.283 [2024-04-26 14:10:50.940135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940547] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.940978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.940993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.284 [2024-04-26 14:10:50.941166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.284 [2024-04-26 14:10:50.941190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007040 is same with the state(5) to be set 00:28:11.284 [2024-04-26 14:10:50.941206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:11.284 [2024-04-26 14:10:50.941217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:11.284 [2024-04-26 14:10:50.941229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:28:11.285 [2024-04-26 14:10:50.941242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:11.285 [2024-04-26 14:10:50.941501] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 00:28:11.285 [2024-04-26 14:10:50.941735] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:11.285 [2024-04-26 14:10:50.941847] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:28:11.285 [2024-04-26 14:10:50.941975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.285 [2024-04-26 14:10:50.942030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:11.285 [2024-04-26 14:10:50.942047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:28:11.285 [2024-04-26 14:10:50.942062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:28:11.285 [2024-04-26 14:10:50.942084] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:28:11.285 [2024-04-26 14:10:50.942109] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:11.285 [2024-04-26 14:10:50.942122] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:11.285 [2024-04-26 14:10:50.942136] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:11.285 [2024-04-26 14:10:50.942176] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:11.285 [2024-04-26 14:10:50.942190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:11.544 14:10:50 -- host/timeout.sh@56 -- # sleep 2
00:28:13.443 [2024-04-26 14:10:52.939105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-04-26 14:10:52.939219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:13.443 [2024-04-26 14:10:52.939239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420
00:28:13.443 [2024-04-26 14:10:52.939255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set
00:28:13.443 [2024-04-26 14:10:52.939287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor
00:28:13.443 [2024-04-26 14:10:52.939308] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:13.443 [2024-04-26 14:10:52.939320] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:13.443 [2024-04-26 14:10:52.939334] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:13.443 [2024-04-26 14:10:52.939365] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:13.443 [2024-04-26 14:10:52.939377] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:13.443 14:10:52 -- host/timeout.sh@57 -- # get_controller
00:28:13.443 14:10:52 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:13.443 14:10:52 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:13.701 14:10:53 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:28:13.701 14:10:53 -- host/timeout.sh@58 -- # get_bdev
00:28:13.701 14:10:53 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:28:13.701 14:10:53 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:28:13.959 14:10:53 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:28:13.959 14:10:53 -- host/timeout.sh@61 -- # sleep 5
00:28:15.331 [2024-04-26 14:10:54.936301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.331 [2024-04-26 14:10:54.936405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.331 [2024-04-26 14:10:54.936423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420
00:28:15.331 [2024-04-26 14:10:54.936440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set
00:28:15.331 [2024-04-26 14:10:54.936474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor
00:28:15.331 [2024-04-26 14:10:54.936495] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:15.331 [2024-04-26 14:10:54.936508] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:15.331 [2024-04-26 14:10:54.936521] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:15.331 [2024-04-26 14:10:54.936553] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:15.331 [2024-04-26 14:10:54.936567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:17.862 [2024-04-26 14:10:56.933390] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:18.432
00:28:18.432 Latency(us)
00:28:18.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:18.432 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:18.432 Verification LBA range: start 0x0 length 0x4000
00:28:18.432 NVMe0n1 : 8.12 1261.01 4.93 15.76 0.00 100490.21 2105.57 7061253.96
00:28:18.432 ===================================================================================================================
00:28:18.432 Total : 1261.01 4.93 15.76 0.00 100490.21 2105.57 7061253.96
00:28:18.432 0
00:28:19.000 14:10:58 -- host/timeout.sh@62 -- # get_controller
00:28:19.000 14:10:58 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:19.000 14:10:58 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:28:19.000 14:10:58 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:28:19.000 14:10:58 -- host/timeout.sh@63 -- # get_bdev
00:28:19.000 14:10:58 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:28:19.001 14:10:58 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:28:19.258 14:10:58 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:28:19.259 14:10:58 -- host/timeout.sh@65 -- # wait 90986
00:28:19.259 14:10:58 -- host/timeout.sh@67 -- # killprocess 90943
00:28:19.259 14:10:58 -- common/autotest_common.sh@936 -- # '[' -z 90943 ']'
00:28:19.259 14:10:58 -- common/autotest_common.sh@940 -- # kill -0 90943
00:28:19.259 14:10:58 -- common/autotest_common.sh@941 -- # uname
00:28:19.259 14:10:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:19.259 14:10:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90943
00:28:19.259 14:10:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:28:19.259 14:10:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:28:19.259 killing process with pid 90943
Received shutdown signal, test time was about 9.057462 seconds
00:28:19.259
00:28:19.259 Latency(us)
00:28:19.259 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:19.259 ===================================================================================================================
00:28:19.259 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:19.259 14:10:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90943'
00:28:19.259 14:10:58 -- common/autotest_common.sh@955 -- # kill 90943
00:28:19.259 14:10:58 -- common/autotest_common.sh@960 -- # wait 90943
00:28:20.640 14:11:00 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:20.640 [2024-04-26 14:11:00.303936] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:20.898 14:11:00 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:28:20.898 14:11:00 -- host/timeout.sh@74 -- # bdevperf_pid=91156
00:28:20.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:20.898 14:11:00 -- host/timeout.sh@76 -- # waitforlisten 91156 /var/tmp/bdevperf.sock
00:28:20.898 14:11:00 -- common/autotest_common.sh@817 -- # '[' -z 91156 ']'
00:28:20.898 14:11:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:20.898 14:11:00 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:20.898 14:11:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:20.898 14:11:00 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:20.898 14:11:00 -- common/autotest_common.sh@10 -- # set +x
00:28:20.898 [2024-04-26 14:11:00.416198] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:28:20.898 [2024-04-26 14:11:00.416558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91156 ]
00:28:21.156 [2024-04-26 14:11:00.588096] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:21.156 [2024-04-26 14:11:00.818642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:21.724 14:11:01 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:21.724 14:11:01 -- common/autotest_common.sh@850 -- # return 0
00:28:21.724 14:11:01 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:28:21.983 14:11:01 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:28:22.243 NVMe0n1
00:28:22.243 14:11:01 -- host/timeout.sh@84 -- # rpc_pid=91199
00:28:22.243 14:11:01 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:22.243 14:11:01 -- host/timeout.sh@86 -- # sleep 1
00:28:22.243 Running I/O for 10 seconds...
00:28:23.218 14:11:02 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:23.510 [2024-04-26 14:11:02.953736] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953830] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953841] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953852] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953861] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953872] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953881] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953892] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953902] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953912] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953922] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953932] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953942] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953951] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953971] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.953991] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be 
set 00:28:23.510 [2024-04-26 14:11:02.954000] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954010] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954020] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954030] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954039] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954049] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954059] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954110] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954121] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954141] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954174] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954184] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954196] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954206] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954216] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 
00:28:23.510 [2024-04-26 14:11:02.954226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954236] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954256] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954276] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954286] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954296] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954306] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954333] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954343] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954364] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954373] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.510 [2024-04-26 14:11:02.954383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954393] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954403] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954423] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954433] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954442] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 
00:28:23.511 [2024-04-26 14:11:02.954452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954472] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954521] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954530] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954560] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954589] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954598] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.954617] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:28:23.511 [2024-04-26 14:11:02.955413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 14:11:02.955662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 14:11:02.955687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 14:11:02.955710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:87384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 14:11:02.955734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:87392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 
14:11:02.955758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 14:11:02.955782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.511 [2024-04-26 14:11:02.955805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.955988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.955999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.956012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.956023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.956037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.956048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.956061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.956072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.956084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.956095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.956108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.956120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.511 [2024-04-26 14:11:02.956133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.511 [2024-04-26 14:11:02.956143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:23.512 [2024-04-26 14:11:02.956757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.512 [2024-04-26 14:11:02.956792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:87456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.956980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.956993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.957004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.957017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.957028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.957041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.957053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.957066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.957077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.957090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.512 [2024-04-26 14:11:02.957101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.512 [2024-04-26 14:11:02.957114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957245] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87640 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 
[2024-04-26 14:11:02.957731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.957978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.957991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:23.513 [2024-04-26 14:11:02.958002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.958014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.513 [2024-04-26 14:11:02.958025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.958037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.513 [2024-04-26 14:11:02.958048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.513 [2024-04-26 14:11:02.958061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.513 [2024-04-26 14:11:02.958072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.514 [2024-04-26 14:11:02.958570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000006e40 is same with the state(5) to be set 00:28:23.514 [2024-04-26 14:11:02.958597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:23.514 [2024-04-26 14:11:02.958608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:23.514 [2024-04-26 14:11:02.958620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87352 len:8 PRP1 0x0 PRP2 0x0 00:28:23.514 [2024-04-26 14:11:02.958632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.514 [2024-04-26 14:11:02.958881] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000006e40 was disconnected and freed. reset controller. 
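The flood of completions above is the initiator draining every command that was still queued on the I/O qpair when the connection dropped; each one is failed with the NVMe status printed as (00/08), read as (SCT/SC): Status Code Type 0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion). A minimal shell sketch of that decoding, purely illustrative and not part of the test scripts:

# decode_nvme_status SCT SC -- hypothetical helper for reading "(00/08)"-style
# status pairs in the log lines above; only the SQ-deletion case is spelled out.
decode_nvme_status() {
    local sct=$((16#$1)) sc=$((16#$2))
    if [ "$sct" -eq 0 ] && [ "$sc" -eq 8 ]; then
        echo "Generic Command Status / Command Aborted due to SQ Deletion"
    else
        echo "SCT=$sct SC=$sc (see the NVMe base specification status tables)"
    fi
}
decode_nvme_status 00 08   # -> Generic Command Status / Command Aborted due to SQ Deletion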
00:28:23.514 [2024-04-26 14:11:02.959086] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.514 [2024-04-26 14:11:02.959192] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:28:23.514 [2024-04-26 14:11:02.959295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.514 [2024-04-26 14:11:02.959338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:23.514 [2024-04-26 14:11:02.959354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:28:23.514 [2024-04-26 14:11:02.959368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:28:23.514 [2024-04-26 14:11:02.959388] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:28:23.514 [2024-04-26 14:11:02.959423] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:23.514 [2024-04-26 14:11:02.959436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:23.514 [2024-04-26 14:11:02.959450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.514 [2024-04-26 14:11:02.959474] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:23.514 [2024-04-26 14:11:02.959487] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.514 14:11:02 -- host/timeout.sh@90 -- # sleep 1 00:28:24.449 [2024-04-26 14:11:03.958029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.449 [2024-04-26 14:11:03.958135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.449 [2024-04-26 14:11:03.958165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420 00:28:24.449 [2024-04-26 14:11:03.958183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set 00:28:24.449 [2024-04-26 14:11:03.958217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:28:24.449 [2024-04-26 14:11:03.958247] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.449 [2024-04-26 14:11:03.958259] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.449 [2024-04-26 14:11:03.958274] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.449 [2024-04-26 14:11:03.958307] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
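The connect() failures above (errno 111, connection refused) and the repeated reset attempts occur while the target has no TCP listener on 10.0.0.2:4420; the next step in the log restores it with nvmf_subsystem_add_listener, after which the controller reset succeeds. As a rough sketch, with the sleep chosen for illustration rather than taken from host/timeout.sh, the same window can be reproduced by toggling the listener by hand:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # same rpc.py path used elsewhere in this log
NQN=nqn.2016-06.io.spdk:cnode1

# Drop the listener: host-side reconnects now fail with errno 111 as above.
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
sleep 1                                            # illustrative delay only
# Restore the listener: the next controller reset reconnects and I/O resumes.
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420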
00:28:24.449 [2024-04-26 14:11:03.958320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.449 14:11:03 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.708 [2024-04-26 14:11:04.156854] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.708 14:11:04 -- host/timeout.sh@92 -- # wait 91199 00:28:25.642 [2024-04-26 14:11:04.978066] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:32.233 00:28:32.233 Latency(us) 00:28:32.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.233 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:28:32.233 Verification LBA range: start 0x0 length 0x4000 00:28:32.233 NVMe0n1 : 10.01 6963.73 27.20 0.00 0.00 18349.48 1315.98 3032026.99 00:28:32.233 =================================================================================================================== 00:28:32.233 Total : 6963.73 27.20 0.00 0.00 18349.48 1315.98 3032026.99 00:28:32.233 0 00:28:32.233 14:11:11 -- host/timeout.sh@97 -- # rpc_pid=91316 00:28:32.233 14:11:11 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:32.233 14:11:11 -- host/timeout.sh@98 -- # sleep 1 00:28:32.491 Running I/O for 10 seconds... 00:28:33.459 14:11:12 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.459 [2024-04-26 14:11:13.048254] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048313] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048327] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048339] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048350] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048361] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048372] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048383] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048426] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048437] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048447] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048468] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048489] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048522] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048555] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048566] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048599] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048620] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048642] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048652] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048663] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048674] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048685] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048696] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.459 [2024-04-26 14:11:13.048706] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048717] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048728] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048739] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048771] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048782] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048792] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048803] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048837] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048847] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048857] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048867] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048887] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048897] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048907] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048917] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048927] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048937] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048957] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048966] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048977] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.048997] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049007] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049018] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049028] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049048] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049058] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049069] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049090] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the 
state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049100] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049127] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049155] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049166] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049177] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049209] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049220] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049230] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049241] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049262] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049272] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049283] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049293] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049304] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049314] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.049325] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:28:33.460 [2024-04-26 14:11:13.050265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.460 [2024-04-26 14:11:13.050320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.460 [2024-04-26 14:11:13.050350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:33.460 [2024-04-26 14:11:13.050363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[The same NOTICE pair — an outstanding command printed by nvme_io_qpair_print_command, followed by an ABORTED - SQ DELETION (00/08) completion from spdk_nvme_print_completion — repeats between 14:11:13.050379 and 14:11:13.053677 for every queued I/O on qid:1: READ commands covering lba 84896-85456 and WRITE commands covering lba 85464-85888, all len:8, with varying cid values.]
00:28:33.463 [2024-04-26 14:11:13.053690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009c40 is same with the state(5) to be set
00:28:33.463 [2024-04-26 14:11:13.053705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:33.463 [2024-04-26 14:11:13.053716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:33.463 [2024-04-26 14:11:13.053727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85896 len:8 PRP1 0x0 PRP2 0x0
00:28:33.463 [2024-04-26 14:11:13.053748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:33.463 [2024-04-26 14:11:13.054013] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009c40 was disconnected and freed. reset controller.
00:28:33.463 [2024-04-26 14:11:13.054244] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.463 [2024-04-26 14:11:13.054327] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor
00:28:33.463 [2024-04-26 14:11:13.054425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.463 [2024-04-26 14:11:13.054466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:33.463 [2024-04-26 14:11:13.054482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004c40 with addr=10.0.0.2, port=4420
00:28:33.463 [2024-04-26 14:11:13.054496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004c40 is same with the state(5) to be set
00:28:33.463 [2024-04-26 14:11:13.054516] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor
00:28:33.463 [2024-04-26 14:11:13.054534] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:33.463 [2024-04-26 14:11:13.054546] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:33.463 [2024-04-26 14:11:13.054560] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:33.463 [2024-04-26 14:11:13.054584] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:33.463 [2024-04-26 14:11:13.054597] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:33.463 14:11:13 -- host/timeout.sh@101 -- # sleep 3
[The same reconnect cycle — two connect() failed, errno = 111 errors, a sock connection error on tqpair=0x614000004c40 (addr=10.0.0.2, port=4420), "Ctrlr is in error state", "controller reinitialization failed", "in failed state." and "Resetting controller failed." — repeats at 14:11:14.053 (00:28:34.401), 14:11:15.052 (00:28:35.777) and 14:11:16.053 (00:28:36.712), each attempt followed by another "resetting controller" notice; the final attempt ends with:]
00:28:36.712 [2024-04-26 14:11:16.056798] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
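Note: errno = 111 is ECONNREFUSED; the host cannot reconnect because the target's TCP listener on 10.0.0.2:4420 was removed earlier in this test, and bdev_nvme retries the controller reset roughly once per second until host/timeout.sh re-adds the listener at @102 just below, after which the reset succeeds. A minimal sketch of the outage/recovery pattern being exercised here, using the rpc.py subcommands visible in this log (the sleep length is illustrative, not the script's exact timing):
  # Drop the TCP listener to simulate a target-side outage; host connect() now fails with ECONNREFUSED.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3   # bdev_nvme keeps retrying the reset/reconnect in the background (about once per second above)
  # Restore the listener; the next reconnect attempt succeeds and the controller reset completes.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420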
00:28:36.712 [2024-04-26 14:11:16.056844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:36.712 14:11:16 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:36.712 [2024-04-26 14:11:16.256991] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:36.712 14:11:16 -- host/timeout.sh@103 -- # wait 91316
00:28:37.660 [2024-04-26 14:11:17.093726] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:28:42.951
00:28:42.951 Latency(us)
00:28:42.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.951 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:42.951 Verification LBA range: start 0x0 length 0x4000
00:28:42.951 NVMe0n1 : 10.01 5870.43 22.93 4710.69 0.00 12071.54 523.10 3018551.31
00:28:42.951 ===================================================================================================================
00:28:42.951 Total : 5870.43 22.93 4710.69 0.00 12071.54 0.00 3018551.31
00:28:42.951 0
00:28:42.951 14:11:21 -- host/timeout.sh@105 -- # killprocess 91156
00:28:42.951 14:11:21 -- common/autotest_common.sh@936 -- # '[' -z 91156 ']'
00:28:42.951 14:11:21 -- common/autotest_common.sh@940 -- # kill -0 91156
00:28:42.951 14:11:21 -- common/autotest_common.sh@941 -- # uname
00:28:42.951 14:11:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:42.951 14:11:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91156
00:28:42.951 killing process with pid 91156
Received shutdown signal, test time was about 10.000000 seconds
00:28:42.951
00:28:42.951 Latency(us)
00:28:42.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:42.951 ===================================================================================================================
00:28:42.951 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:42.951 14:11:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:28:42.951 14:11:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:28:42.951 14:11:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91156'
00:28:42.951 14:11:22 -- common/autotest_common.sh@955 -- # kill 91156
00:28:42.951 14:11:22 -- common/autotest_common.sh@960 -- # wait 91156
00:28:43.518 14:11:23 -- host/timeout.sh@110 -- # bdevperf_pid=91454
00:28:43.518 14:11:23 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:28:43.518 14:11:23 -- host/timeout.sh@112 -- # waitforlisten 91454 /var/tmp/bdevperf.sock
00:28:43.518 14:11:23 -- common/autotest_common.sh@817 -- # '[' -z 91454 ']'
00:28:43.518 14:11:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:43.518 14:11:23 -- common/autotest_common.sh@822 -- # local max_retries=100
00:28:43.518 14:11:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
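Note: a quick sanity check on the Latency table above for the first bdevperf run (not part of the log): with the 4096-byte I/O size reported in the Job line, the IOPS and MiB/s columns are consistent with each other. The conversion can be verified with a one-liner:
  awk 'BEGIN { printf "%.2f MiB/s\n", 5870.43 * 4096 / (1024 * 1024) }'   # prints 22.93 MiB/s, matching the table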
00:28:43.518 14:11:23 -- common/autotest_common.sh@826 -- # xtrace_disable
00:28:43.518 14:11:23 -- common/autotest_common.sh@10 -- # set +x
00:28:43.778 [2024-04-26 14:11:23.251192] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization...
00:28:43.778 [2024-04-26 14:11:23.251314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91454 ]
00:28:43.778 [2024-04-26 14:11:23.427026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:44.036 [2024-04-26 14:11:23.663183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:28:44.602 14:11:24 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:28:44.602 14:11:24 -- common/autotest_common.sh@850 -- # return 0
00:28:44.602 14:11:24 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 91454 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:28:44.602 14:11:24 -- host/timeout.sh@116 -- # dtrace_pid=91482
00:28:44.602 14:11:24 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:28:44.602 14:11:24 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:28:44.860 NVMe0n1
00:28:45.117 14:11:24 -- host/timeout.sh@124 -- # rpc_pid=91530
00:28:45.117 14:11:24 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:45.117 14:11:24 -- host/timeout.sh@125 -- # sleep 1
00:28:45.117 Running I/O for 10 seconds...
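Note: the block above is the bdevperf-as-a-daemon pattern used by these host tests: bdevperf is started with -z so it idles with its RPC server on /var/tmp/bdevperf.sock, the NVMe-oF host is configured through rpc.py, and the workload is then triggered with bdevperf.py perform_tests. A condensed sketch of that sequence (not the test script verbatim) using only the commands and arguments shown in the log; backgrounding with & stands in for the script's waitforlisten/wait bookkeeping, and the bdev_nvme_set_options flags are copied as-is from host/timeout.sh:
  # Start bdevperf idle (-z) on a private RPC socket; the job definition is a 10 s, queue-depth-128, 4 KiB random-read run.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &

  # Apply the host-side bdev_nvme options used by host/timeout.sh, then attach the remote controller
  # with a 5 s controller-loss timeout and 2 s reconnect delay (the knobs this timeout test exercises).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued I/O and let it run; the test then removes the listener (below) mid-run to provoke the timeout path.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &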
00:28:46.109 14:11:25 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:46.109 [2024-04-26 14:11:25.719996] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set
[The identical tcp.c:1587:nvmf_tcp_qpair_set_recv_state error for tqpair=0x618000005080 repeats continuously from 14:11:25.720054 through 14:11:25.721158 immediately after the listener is removed, and continues below.]
00:28:46.110 [2024-04-26 14:11:25.721178] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721188] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721198] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721207] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721217] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721227] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721237] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721247] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721257] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721266] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721287] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721318] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721328] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721338] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721358] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.721378] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 
00:28:46.110 [2024-04-26 14:11:25.721389] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005080 is same with the state(5) to be set 00:28:46.110 [2024-04-26 14:11:25.722197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722498] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:125272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:35272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:30232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:111440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.722984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.722995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.110 [2024-04-26 14:11:25.723007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.110 [2024-04-26 14:11:25.723019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:41944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:75576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:42736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 
[2024-04-26 14:11:25.723509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:67944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723744] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:34320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:37120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:65368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.723981] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.723992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724230] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:78 nsid:1 lba:34536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.111 [2024-04-26 14:11:25.724303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.111 [2024-04-26 14:11:25.724314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:121872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:48304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:27432 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:46296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 
[2024-04-26 14:11:25.724961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.724984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.724997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:124904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:34832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:92560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.112 [2024-04-26 14:11:25.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007040 is same with the state(5) to be set 00:28:46.112 [2024-04-26 14:11:25.725393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:46.112 [2024-04-26 14:11:25.725404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:46.112 [2024-04-26 14:11:25.725415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17864 len:8 PRP1 0x0 PRP2 0x0 00:28:46.112 [2024-04-26 14:11:25.725428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.112 [2024-04-26 14:11:25.725658] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007040 was disconnected and freed. reset controller. 
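The two repeated message groups above (the tcp.c:1587 recv-state errors and the READ / ABORTED - SQ DELETION pairs) accompany the teardown of qpair 0x614000007040: queued I/O is aborted and the qpair is disconnected and freed before the controller reset. If this console output is saved locally, two fixed-string greps give a quick tally of how much of the log those messages account for; the file name build.log is an assumption here, not something produced by the job itself.

  # count occurrences of the two repeated qpair-teardown messages
  grep -oF 'The recv state of tqpair=0x618000005080 is same with the state(5) to be set' build.log | wc -l
  grep -oF 'ABORTED - SQ DELETION' build.log | wc -l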
00:28:46.112 [2024-04-26 14:11:25.725823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.113 [2024-04-26 14:11:25.725848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.113 [2024-04-26 14:11:25.725863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.113 [2024-04-26 14:11:25.725874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.113 [2024-04-26 14:11:25.725887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.113 [2024-04-26 14:11:25.725899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.113 [2024-04-26 14:11:25.725912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:46.113 [2024-04-26 14:11:25.725923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.113 [2024-04-26 14:11:25.725934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:28:46.113 [2024-04-26 14:11:25.726164] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:46.113 [2024-04-26 14:11:25.726206] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:28:46.113 [2024-04-26 14:11:25.726305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.113 [2024-04-26 14:11:25.726354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.113 [2024-04-26 14:11:25.726375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:28:46.113 [2024-04-26 14:11:25.726388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:28:46.113 [2024-04-26 14:11:25.726407] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:28:46.113 [2024-04-26 14:11:25.726424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:46.113 [2024-04-26 14:11:25.726437] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:46.113 [2024-04-26 14:11:25.726450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:46.113 14:11:25 -- host/timeout.sh@128 -- # wait 91530 00:28:46.113 [2024-04-26 14:11:25.751251] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
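The posix.c:1037 entries above report the raw errno from connect(); on Linux, errno 111 is ECONNREFUSED, meaning nothing was accepting connections on 10.0.0.2 port 4420 at that moment, so each reconnect attempt fails immediately. A quick way to confirm the mapping on the build host, assuming python3 is available there:

  # translate errno 111 into its symbolic name and message
  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # ECONNREFUSED Connection refused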
00:28:46.113 [2024-04-26 14:11:25.751317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.648 [2024-04-26 14:11:27.748328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.648 [2024-04-26 14:11:27.748424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.648 [2024-04-26 14:11:27.748443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:28:48.648 [2024-04-26 14:11:27.748460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:28:48.648 [2024-04-26 14:11:27.748488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:28:48.648 [2024-04-26 14:11:27.748525] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.648 [2024-04-26 14:11:27.748538] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.648 [2024-04-26 14:11:27.748554] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.648 [2024-04-26 14:11:27.748586] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:48.648 [2024-04-26 14:11:27.748599] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.580 [2024-04-26 14:11:29.745539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-04-26 14:11:29.746034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.580 [2024-04-26 14:11:29.746119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000004e40 with addr=10.0.0.2, port=4420 00:28:50.580 [2024-04-26 14:11:29.746215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000004e40 is same with the state(5) to be set 00:28:50.580 [2024-04-26 14:11:29.746289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004e40 (9): Bad file descriptor 00:28:50.580 [2024-04-26 14:11:29.746364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.580 [2024-04-26 14:11:29.746419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.580 [2024-04-26 14:11:29.746473] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.580 [2024-04-26 14:11:29.746544] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.580 [2024-04-26 14:11:29.746620] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:52.482 [2024-04-26 14:11:31.743529] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:53.052 00:28:53.052 Latency(us) 00:28:53.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.052 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:28:53.052 NVMe0n1 : 8.11 2556.17 9.99 15.78 0.00 49917.72 2158.21 7061253.96 00:28:53.052 =================================================================================================================== 00:28:53.052 Total : 2556.17 9.99 15.78 0.00 49917.72 2158.21 7061253.96 00:28:53.310 0 00:28:53.311 14:11:32 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:53.311 Attaching 5 probes... 00:28:53.311 1113.807855: reset bdev controller NVMe0 00:28:53.311 1113.909035: reconnect bdev controller NVMe0 00:28:53.311 3135.837501: reconnect delay bdev controller NVMe0 00:28:53.311 3135.861323: reconnect bdev controller NVMe0 00:28:53.311 5133.077899: reconnect delay bdev controller NVMe0 00:28:53.311 5133.101847: reconnect bdev controller NVMe0 00:28:53.311 7131.133957: reconnect delay bdev controller NVMe0 00:28:53.311 7131.161806: reconnect bdev controller NVMe0 00:28:53.311 14:11:32 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:28:53.311 14:11:32 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:28:53.311 14:11:32 -- host/timeout.sh@136 -- # kill 91482 00:28:53.311 14:11:32 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:28:53.311 14:11:32 -- host/timeout.sh@139 -- # killprocess 91454 00:28:53.311 14:11:32 -- common/autotest_common.sh@936 -- # '[' -z 91454 ']' 00:28:53.311 14:11:32 -- common/autotest_common.sh@940 -- # kill -0 91454 00:28:53.311 14:11:32 -- common/autotest_common.sh@941 -- # uname 00:28:53.311 14:11:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:53.311 14:11:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91454 00:28:53.311 killing process with pid 91454 00:28:53.311 Received shutdown signal, test time was about 8.188695 seconds 00:28:53.311 00:28:53.311 Latency(us) 00:28:53.311 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.311 =================================================================================================================== 00:28:53.311 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.311 14:11:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:28:53.311 14:11:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:28:53.311 14:11:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91454' 00:28:53.311 14:11:32 -- common/autotest_common.sh@955 -- # kill 91454 00:28:53.311 14:11:32 -- common/autotest_common.sh@960 -- # wait 91454 00:28:54.691 14:11:34 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:54.691 14:11:34 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:28:54.691 14:11:34 -- host/timeout.sh@145 -- # nvmftestfini 00:28:54.691 14:11:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:54.691 14:11:34 -- nvmf/common.sh@117 -- # sync 00:28:54.691 14:11:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:54.691 14:11:34 -- nvmf/common.sh@120 -- # set +e 00:28:54.691 14:11:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:54.691 14:11:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:54.691 rmmod nvme_tcp 00:28:54.691 rmmod nvme_fabrics 00:28:54.950 rmmod nvme_keyring 00:28:54.950 14:11:34 -- nvmf/common.sh@123 -- 
# modprobe -v -r nvme-fabrics 00:28:54.950 14:11:34 -- nvmf/common.sh@124 -- # set -e 00:28:54.950 14:11:34 -- nvmf/common.sh@125 -- # return 0 00:28:54.950 14:11:34 -- nvmf/common.sh@478 -- # '[' -n 90853 ']' 00:28:54.950 14:11:34 -- nvmf/common.sh@479 -- # killprocess 90853 00:28:54.950 14:11:34 -- common/autotest_common.sh@936 -- # '[' -z 90853 ']' 00:28:54.950 14:11:34 -- common/autotest_common.sh@940 -- # kill -0 90853 00:28:54.950 14:11:34 -- common/autotest_common.sh@941 -- # uname 00:28:54.950 14:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:54.950 14:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90853 00:28:54.950 killing process with pid 90853 00:28:54.950 14:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:54.950 14:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:54.950 14:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90853' 00:28:54.950 14:11:34 -- common/autotest_common.sh@955 -- # kill 90853 00:28:54.950 14:11:34 -- common/autotest_common.sh@960 -- # wait 90853 00:28:56.343 14:11:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:56.343 14:11:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:56.343 14:11:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:56.343 14:11:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:56.343 14:11:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:56.343 14:11:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.343 14:11:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:56.343 14:11:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.343 14:11:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:56.343 00:28:56.343 real 0m50.541s 00:28:56.343 user 2m24.807s 00:28:56.343 sys 0m5.808s 00:28:56.343 14:11:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:56.343 14:11:35 -- common/autotest_common.sh@10 -- # set +x 00:28:56.343 ************************************ 00:28:56.343 END TEST nvmf_timeout 00:28:56.343 ************************************ 00:28:56.601 14:11:36 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:28:56.601 14:11:36 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:28:56.601 14:11:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:56.601 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:56.601 14:11:36 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:28:56.601 00:28:56.601 real 13m48.975s 00:28:56.601 user 35m8.624s 00:28:56.601 sys 3m21.722s 00:28:56.601 14:11:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:56.601 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:56.601 ************************************ 00:28:56.601 END TEST nvmf_tcp 00:28:56.601 ************************************ 00:28:56.601 14:11:36 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:28:56.601 14:11:36 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:56.601 14:11:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:56.601 14:11:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:56.601 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:56.601 ************************************ 00:28:56.601 START TEST spdkcli_nvmf_tcp 00:28:56.601 ************************************ 00:28:56.601 14:11:36 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:56.860 * Looking for test storage... 00:28:56.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:28:56.860 14:11:36 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:28:56.860 14:11:36 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:56.860 14:11:36 -- nvmf/common.sh@7 -- # uname -s 00:28:56.860 14:11:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.860 14:11:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.860 14:11:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.860 14:11:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.860 14:11:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.860 14:11:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.860 14:11:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.860 14:11:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.860 14:11:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.860 14:11:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.860 14:11:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:28:56.860 14:11:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:28:56.860 14:11:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.860 14:11:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.860 14:11:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:56.860 14:11:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.860 14:11:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:56.860 14:11:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.860 14:11:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.860 14:11:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.860 14:11:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.860 14:11:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.860 14:11:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.860 14:11:36 -- paths/export.sh@5 -- # export PATH 00:28:56.860 14:11:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.860 14:11:36 -- nvmf/common.sh@47 -- # : 0 00:28:56.860 14:11:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:56.860 14:11:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:56.860 14:11:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.860 14:11:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.860 14:11:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.860 14:11:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:56.860 14:11:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:56.860 14:11:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:56.860 14:11:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:56.860 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:56.860 14:11:36 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:56.860 14:11:36 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=91784 00:28:56.860 14:11:36 -- spdkcli/common.sh@34 -- # waitforlisten 91784 00:28:56.860 14:11:36 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:56.860 14:11:36 -- common/autotest_common.sh@817 -- # '[' -z 91784 ']' 00:28:56.860 14:11:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.860 14:11:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:56.860 14:11:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.860 14:11:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:56.860 14:11:36 -- common/autotest_common.sh@10 -- # set +x 00:28:56.860 [2024-04-26 14:11:36.510998] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:28:56.860 [2024-04-26 14:11:36.511315] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91784 ] 00:28:57.119 [2024-04-26 14:11:36.679977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:57.377 [2024-04-26 14:11:36.918298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.378 [2024-04-26 14:11:36.918331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.944 14:11:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.944 14:11:37 -- common/autotest_common.sh@850 -- # return 0 00:28:57.944 14:11:37 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:57.944 14:11:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:57.944 14:11:37 -- common/autotest_common.sh@10 -- # set +x 00:28:57.944 14:11:37 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:57.944 14:11:37 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:57.944 14:11:37 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:57.944 14:11:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:57.944 14:11:37 -- common/autotest_common.sh@10 -- # set +x 00:28:57.944 14:11:37 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:57.944 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:57.944 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:57.944 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:57.944 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:57.944 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:57.944 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:57.944 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:57.944 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:57.944 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:57.944 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:57.945 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:57.945 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:57.945 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:57.945 ' 00:28:58.203 [2024-04-26 14:11:37.744334] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:00.736 [2024-04-26 14:11:40.238530] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.111 [2024-04-26 14:11:41.546346] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:04.642 [2024-04-26 14:11:43.965769] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:06.547 [2024-04-26 14:11:46.056609] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:08.452 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:08.452 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:08.452 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:08.452 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:08.452 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:08.452 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:08.452 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:08.452 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:08.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:08.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:08.452 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:08.453 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:08.453 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:08.453 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:08.453 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:08.453 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:08.453 14:11:47 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:08.453 14:11:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:08.453 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:29:08.453 14:11:47 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:08.453 14:11:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:08.453 14:11:47 -- common/autotest_common.sh@10 -- # set +x 00:29:08.453 14:11:47 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:08.453 14:11:47 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:29:08.713 14:11:48 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:08.713 14:11:48 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:08.713 14:11:48 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:08.713 14:11:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:08.713 14:11:48 -- common/autotest_common.sh@10 -- # set +x 00:29:08.713 14:11:48 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:08.713 14:11:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:08.713 14:11:48 -- common/autotest_common.sh@10 -- # set +x 00:29:08.713 14:11:48 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:08.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:08.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:08.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:08.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:08.713 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:08.713 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:08.713 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:08.713 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:08.713 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:08.713 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:08.713 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:08.713 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:08.713 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:08.713 ' 00:29:15.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:15.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:15.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:15.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:15.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:15.320 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:15.320 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:15.320 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:15.320 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:15.320 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:15.320 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:15.320 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:15.320 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:15.320 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:15.320 14:11:54 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:15.320 14:11:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:15.320 14:11:54 -- common/autotest_common.sh@10 -- # set +x 00:29:15.320 14:11:54 -- spdkcli/nvmf.sh@90 -- # killprocess 91784 00:29:15.320 14:11:54 -- common/autotest_common.sh@936 -- # '[' -z 91784 ']' 00:29:15.320 14:11:54 -- common/autotest_common.sh@940 -- # kill -0 91784 00:29:15.320 14:11:54 -- common/autotest_common.sh@941 -- # uname 00:29:15.320 14:11:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.320 14:11:54 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 91784 00:29:15.320 killing process with pid 91784 00:29:15.320 14:11:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:15.320 14:11:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:15.320 14:11:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91784' 00:29:15.320 14:11:54 -- common/autotest_common.sh@955 -- # kill 91784 00:29:15.320 [2024-04-26 14:11:54.316048] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:15.320 14:11:54 -- common/autotest_common.sh@960 -- # wait 91784 00:29:16.255 14:11:55 -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:16.255 14:11:55 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:16.255 14:11:55 -- spdkcli/common.sh@13 -- # '[' -n 91784 ']' 00:29:16.256 14:11:55 -- spdkcli/common.sh@14 -- # killprocess 91784 00:29:16.256 Process with pid 91784 is not found 00:29:16.256 14:11:55 -- common/autotest_common.sh@936 -- # '[' -z 91784 ']' 00:29:16.256 14:11:55 -- common/autotest_common.sh@940 -- # kill -0 91784 00:29:16.256 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91784) - No such process 00:29:16.256 14:11:55 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91784 is not found' 00:29:16.256 14:11:55 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:16.256 14:11:55 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:16.256 14:11:55 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:16.256 00:29:16.256 real 0m19.352s 00:29:16.256 user 0m40.720s 00:29:16.256 sys 0m1.224s 00:29:16.256 14:11:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:16.256 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:16.256 ************************************ 00:29:16.256 END TEST spdkcli_nvmf_tcp 00:29:16.256 ************************************ 00:29:16.256 14:11:55 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:16.256 14:11:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:16.256 14:11:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:16.256 14:11:55 -- common/autotest_common.sh@10 -- # set +x 00:29:16.256 ************************************ 00:29:16.256 START TEST nvmf_identify_passthru 00:29:16.256 ************************************ 00:29:16.256 14:11:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:16.256 * Looking for test storage... 
00:29:16.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:16.256 14:11:55 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:16.256 14:11:55 -- nvmf/common.sh@7 -- # uname -s 00:29:16.256 14:11:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.256 14:11:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.256 14:11:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.256 14:11:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:16.256 14:11:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.256 14:11:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.256 14:11:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.256 14:11:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.256 14:11:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.256 14:11:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.256 14:11:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:29:16.256 14:11:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:29:16.256 14:11:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.256 14:11:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.256 14:11:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:16.256 14:11:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.256 14:11:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:16.256 14:11:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.256 14:11:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.256 14:11:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.256 14:11:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.256 14:11:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.256 14:11:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.256 14:11:55 -- paths/export.sh@5 -- # export PATH 00:29:16.256 14:11:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.256 14:11:55 -- nvmf/common.sh@47 -- # : 0 00:29:16.256 14:11:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:16.256 14:11:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:16.256 14:11:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.256 14:11:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.256 14:11:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.256 14:11:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:16.256 14:11:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:16.256 14:11:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:16.256 14:11:55 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:16.515 14:11:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.515 14:11:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.515 14:11:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.515 14:11:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.515 14:11:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.515 14:11:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.515 14:11:55 -- paths/export.sh@5 -- # export PATH 00:29:16.515 14:11:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.515 14:11:55 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:29:16.515 14:11:55 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:16.515 14:11:55 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.515 14:11:55 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:16.515 14:11:55 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:16.515 14:11:55 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:16.515 14:11:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.515 14:11:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:16.515 14:11:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.515 14:11:55 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:16.515 14:11:55 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:16.515 14:11:55 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:16.515 14:11:55 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:16.515 14:11:55 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:16.515 14:11:55 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:16.515 14:11:55 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.515 14:11:55 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.515 14:11:55 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:16.515 14:11:55 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:16.515 14:11:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:16.515 14:11:55 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:16.515 14:11:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:16.515 14:11:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.515 14:11:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:16.515 14:11:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:16.515 14:11:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:16.515 14:11:55 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:16.515 14:11:55 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:16.515 14:11:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:16.515 Cannot find device "nvmf_tgt_br" 00:29:16.515 14:11:55 -- nvmf/common.sh@155 -- # true 00:29:16.515 14:11:55 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:16.515 Cannot find device "nvmf_tgt_br2" 00:29:16.515 14:11:55 -- nvmf/common.sh@156 -- # true 00:29:16.515 14:11:56 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:16.515 14:11:56 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:16.515 Cannot find device "nvmf_tgt_br" 00:29:16.515 14:11:56 -- nvmf/common.sh@158 -- # true 00:29:16.515 14:11:56 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:16.515 Cannot find device "nvmf_tgt_br2" 00:29:16.515 14:11:56 -- nvmf/common.sh@159 -- # true 00:29:16.515 14:11:56 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:16.515 14:11:56 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:16.515 14:11:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:16.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:16.515 14:11:56 -- nvmf/common.sh@162 -- # true 00:29:16.515 14:11:56 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:16.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:29:16.515 14:11:56 -- nvmf/common.sh@163 -- # true 00:29:16.515 14:11:56 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:16.516 14:11:56 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:16.516 14:11:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:16.516 14:11:56 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:16.516 14:11:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:16.516 14:11:56 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:16.516 14:11:56 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:16.516 14:11:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:16.775 14:11:56 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:16.775 14:11:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:16.775 14:11:56 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:16.775 14:11:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:16.775 14:11:56 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:16.775 14:11:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:16.775 14:11:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:16.775 14:11:56 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:16.775 14:11:56 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:16.775 14:11:56 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:16.775 14:11:56 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:16.775 14:11:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:16.775 14:11:56 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:16.775 14:11:56 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:16.775 14:11:56 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:16.775 14:11:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:16.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:29:16.775 00:29:16.775 --- 10.0.0.2 ping statistics --- 00:29:16.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.775 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:29:16.775 14:11:56 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:16.775 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:16.775 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:29:16.775 00:29:16.775 --- 10.0.0.3 ping statistics --- 00:29:16.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.775 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:29:16.775 14:11:56 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:16.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:16.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:29:16.775 00:29:16.775 --- 10.0.0.1 ping statistics --- 00:29:16.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.775 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:29:16.775 14:11:56 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.775 14:11:56 -- nvmf/common.sh@422 -- # return 0 00:29:16.775 14:11:56 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:16.775 14:11:56 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.775 14:11:56 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:16.775 14:11:56 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:16.775 14:11:56 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.775 14:11:56 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:16.775 14:11:56 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:16.775 14:11:56 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:16.775 14:11:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:16.775 14:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:16.775 14:11:56 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:16.775 14:11:56 -- common/autotest_common.sh@1510 -- # bdfs=() 00:29:16.775 14:11:56 -- common/autotest_common.sh@1510 -- # local bdfs 00:29:16.775 14:11:56 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:29:16.775 14:11:56 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:29:16.775 14:11:56 -- common/autotest_common.sh@1499 -- # bdfs=() 00:29:16.775 14:11:56 -- common/autotest_common.sh@1499 -- # local bdfs 00:29:16.775 14:11:56 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:16.775 14:11:56 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:16.775 14:11:56 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:29:16.775 14:11:56 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:29:16.775 14:11:56 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:29:16.775 14:11:56 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:29:16.775 14:11:56 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:29:16.775 14:11:56 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:29:16.775 14:11:56 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:16.775 14:11:56 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:16.775 14:11:56 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:17.342 14:11:56 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:29:17.342 14:11:56 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:29:17.342 14:11:56 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:17.342 14:11:56 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:17.342 14:11:56 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:29:17.343 14:11:56 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:17.343 14:11:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:17.343 14:11:56 -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 14:11:57 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:29:17.602 14:11:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:17.602 14:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 14:11:57 -- target/identify_passthru.sh@31 -- # nvmfpid=92318 00:29:17.602 14:11:57 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.602 14:11:57 -- target/identify_passthru.sh@35 -- # waitforlisten 92318 00:29:17.602 14:11:57 -- common/autotest_common.sh@817 -- # '[' -z 92318 ']' 00:29:17.602 14:11:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.602 14:11:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:17.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.602 14:11:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.602 14:11:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:17.602 14:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:17.602 14:11:57 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:17.602 [2024-04-26 14:11:57.125784] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:17.602 [2024-04-26 14:11:57.125905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.862 [2024-04-26 14:11:57.299633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.862 [2024-04-26 14:11:57.534782] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.863 [2024-04-26 14:11:57.534841] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.863 [2024-04-26 14:11:57.534857] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.863 [2024-04-26 14:11:57.534868] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.863 [2024-04-26 14:11:57.534881] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.863 [2024-04-26 14:11:57.535139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.863 [2024-04-26 14:11:57.535337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.863 [2024-04-26 14:11:57.536121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.863 [2024-04-26 14:11:57.536189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.432 14:11:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:18.432 14:11:57 -- common/autotest_common.sh@850 -- # return 0 00:29:18.432 14:11:57 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:18.432 14:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.432 14:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:18.432 14:11:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.432 14:11:57 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:18.432 14:11:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.432 14:11:57 -- common/autotest_common.sh@10 -- # set +x 00:29:18.691 [2024-04-26 14:11:58.334792] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:18.691 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.691 14:11:58 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.691 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.691 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.691 [2024-04-26 14:11:58.349887] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.691 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.691 14:11:58 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:18.691 14:11:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:18.691 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.950 14:11:58 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:29:18.950 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.950 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.950 Nvme0n1 00:29:18.950 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.950 14:11:58 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:18.950 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.950 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.950 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.950 14:11:58 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:18.950 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.950 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.950 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.950 14:11:58 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.950 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.950 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.950 [2024-04-26 14:11:58.500823] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.950 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:29:18.950 14:11:58 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:18.950 14:11:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:18.950 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:29:18.950 [2024-04-26 14:11:58.508542] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:18.950 [ 00:29:18.950 { 00:29:18.950 "allow_any_host": true, 00:29:18.950 "hosts": [], 00:29:18.950 "listen_addresses": [], 00:29:18.950 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:18.950 "subtype": "Discovery" 00:29:18.950 }, 00:29:18.950 { 00:29:18.950 "allow_any_host": true, 00:29:18.950 "hosts": [], 00:29:18.950 "listen_addresses": [ 00:29:18.950 { 00:29:18.950 "adrfam": "IPv4", 00:29:18.950 "traddr": "10.0.0.2", 00:29:18.950 "transport": "TCP", 00:29:18.950 "trsvcid": "4420", 00:29:18.950 "trtype": "TCP" 00:29:18.950 } 00:29:18.950 ], 00:29:18.950 "max_cntlid": 65519, 00:29:18.950 "max_namespaces": 1, 00:29:18.950 "min_cntlid": 1, 00:29:18.950 "model_number": "SPDK bdev Controller", 00:29:18.950 "namespaces": [ 00:29:18.950 { 00:29:18.950 "bdev_name": "Nvme0n1", 00:29:18.950 "name": "Nvme0n1", 00:29:18.950 "nguid": "FE2C3DB630A6487CA17117D3DC60B9AB", 00:29:18.950 "nsid": 1, 00:29:18.950 "uuid": "fe2c3db6-30a6-487c-a171-17d3dc60b9ab" 00:29:18.950 } 00:29:18.950 ], 00:29:18.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:18.950 "serial_number": "SPDK00000000000001", 00:29:18.950 "subtype": "NVMe" 00:29:18.950 } 00:29:18.950 ] 00:29:18.950 14:11:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:18.951 14:11:58 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:18.951 14:11:58 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:18.951 14:11:58 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:19.210 14:11:58 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:29:19.210 14:11:58 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:19.210 14:11:58 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:19.210 14:11:58 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:19.469 14:11:59 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:29:19.469 14:11:59 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:29:19.469 14:11:59 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:29:19.469 14:11:59 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:19.469 14:11:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.469 14:11:59 -- common/autotest_common.sh@10 -- # set +x 00:29:19.469 14:11:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.469 14:11:59 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:19.469 14:11:59 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:19.469 14:11:59 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:19.469 14:11:59 -- nvmf/common.sh@117 -- # sync 00:29:19.728 14:11:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:19.728 14:11:59 -- nvmf/common.sh@120 -- # set +e 00:29:19.728 14:11:59 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:29:19.728 14:11:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:19.728 rmmod nvme_tcp 00:29:19.728 rmmod nvme_fabrics 00:29:19.728 rmmod nvme_keyring 00:29:19.728 14:11:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:19.728 14:11:59 -- nvmf/common.sh@124 -- # set -e 00:29:19.728 14:11:59 -- nvmf/common.sh@125 -- # return 0 00:29:19.728 14:11:59 -- nvmf/common.sh@478 -- # '[' -n 92318 ']' 00:29:19.728 14:11:59 -- nvmf/common.sh@479 -- # killprocess 92318 00:29:19.728 14:11:59 -- common/autotest_common.sh@936 -- # '[' -z 92318 ']' 00:29:19.728 14:11:59 -- common/autotest_common.sh@940 -- # kill -0 92318 00:29:19.728 14:11:59 -- common/autotest_common.sh@941 -- # uname 00:29:19.728 14:11:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:19.728 14:11:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92318 00:29:19.728 14:11:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:19.728 14:11:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:19.728 killing process with pid 92318 00:29:19.728 14:11:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92318' 00:29:19.728 14:11:59 -- common/autotest_common.sh@955 -- # kill 92318 00:29:19.728 [2024-04-26 14:11:59.280740] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:29:19.728 14:11:59 -- common/autotest_common.sh@960 -- # wait 92318 00:29:21.107 14:12:00 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:21.107 14:12:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:21.107 14:12:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:21.107 14:12:00 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:21.107 14:12:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:21.107 14:12:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.107 14:12:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:21.107 14:12:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.107 14:12:00 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:21.107 00:29:21.107 real 0m4.854s 00:29:21.107 user 0m11.246s 00:29:21.107 sys 0m1.329s 00:29:21.107 ************************************ 00:29:21.107 END TEST nvmf_identify_passthru 00:29:21.107 ************************************ 00:29:21.107 14:12:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:21.107 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:29:21.107 14:12:00 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:21.107 14:12:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:21.107 14:12:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:21.107 14:12:00 -- common/autotest_common.sh@10 -- # set +x 00:29:21.107 ************************************ 00:29:21.107 START TEST nvmf_dif 00:29:21.107 ************************************ 00:29:21.107 14:12:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:29:21.366 * Looking for test storage... 
00:29:21.366 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:29:21.366 14:12:00 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:21.366 14:12:00 -- nvmf/common.sh@7 -- # uname -s 00:29:21.366 14:12:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.366 14:12:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.366 14:12:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.366 14:12:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.366 14:12:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.366 14:12:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.366 14:12:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.366 14:12:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.366 14:12:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.366 14:12:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.366 14:12:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:29:21.366 14:12:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:29:21.366 14:12:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.366 14:12:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.366 14:12:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:21.366 14:12:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.366 14:12:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:21.366 14:12:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.366 14:12:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.366 14:12:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.366 14:12:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.366 14:12:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.366 14:12:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.366 14:12:00 -- paths/export.sh@5 -- # export PATH 00:29:21.366 14:12:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.366 14:12:00 -- nvmf/common.sh@47 -- # : 0 00:29:21.366 14:12:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:21.366 14:12:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:21.366 14:12:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.366 14:12:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.366 14:12:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.366 14:12:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:21.366 14:12:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:21.366 14:12:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:21.366 14:12:00 -- target/dif.sh@15 -- # NULL_META=16 00:29:21.366 14:12:00 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:21.366 14:12:00 -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:21.366 14:12:00 -- target/dif.sh@15 -- # NULL_DIF=1 00:29:21.366 14:12:00 -- target/dif.sh@135 -- # nvmftestinit 00:29:21.366 14:12:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:21.366 14:12:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.366 14:12:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:21.366 14:12:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:21.366 14:12:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:21.366 14:12:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.366 14:12:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:21.366 14:12:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.366 14:12:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:21.366 14:12:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:21.366 14:12:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:21.366 14:12:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:21.366 14:12:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:21.366 14:12:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:21.366 14:12:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.366 14:12:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.366 14:12:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:21.366 14:12:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:21.366 14:12:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:21.366 14:12:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:21.366 14:12:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:21.366 14:12:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.366 14:12:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:21.366 14:12:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:21.366 14:12:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:21.366 14:12:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:21.366 14:12:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:21.366 14:12:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:21.366 Cannot find device "nvmf_tgt_br" 
00:29:21.366 14:12:01 -- nvmf/common.sh@155 -- # true 00:29:21.366 14:12:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:21.366 Cannot find device "nvmf_tgt_br2" 00:29:21.366 14:12:01 -- nvmf/common.sh@156 -- # true 00:29:21.366 14:12:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:21.366 14:12:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:21.626 Cannot find device "nvmf_tgt_br" 00:29:21.626 14:12:01 -- nvmf/common.sh@158 -- # true 00:29:21.626 14:12:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:21.626 Cannot find device "nvmf_tgt_br2" 00:29:21.626 14:12:01 -- nvmf/common.sh@159 -- # true 00:29:21.626 14:12:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:21.626 14:12:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:21.626 14:12:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:21.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:21.626 14:12:01 -- nvmf/common.sh@162 -- # true 00:29:21.626 14:12:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:21.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:21.626 14:12:01 -- nvmf/common.sh@163 -- # true 00:29:21.626 14:12:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:21.626 14:12:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:21.626 14:12:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:21.626 14:12:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:21.626 14:12:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:21.626 14:12:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:21.626 14:12:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:21.626 14:12:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:21.626 14:12:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:21.626 14:12:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:21.626 14:12:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:21.626 14:12:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:21.626 14:12:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:21.626 14:12:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:21.626 14:12:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:21.884 14:12:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:21.884 14:12:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:21.884 14:12:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:21.884 14:12:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:21.884 14:12:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:21.884 14:12:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:21.884 14:12:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:21.884 14:12:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:21.884 14:12:01 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:21.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:29:21.884 00:29:21.884 --- 10.0.0.2 ping statistics --- 00:29:21.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.884 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:29:21.884 14:12:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:21.884 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:21.884 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:29:21.884 00:29:21.884 --- 10.0.0.3 ping statistics --- 00:29:21.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.884 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:29:21.884 14:12:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:21.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:29:21.884 00:29:21.884 --- 10.0.0.1 ping statistics --- 00:29:21.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.884 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:29:21.884 14:12:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.884 14:12:01 -- nvmf/common.sh@422 -- # return 0 00:29:21.884 14:12:01 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:29:21.884 14:12:01 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:22.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:22.452 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:22.452 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:29:22.452 14:12:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.452 14:12:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:22.452 14:12:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:22.452 14:12:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.452 14:12:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:22.452 14:12:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:22.452 14:12:02 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:22.452 14:12:02 -- target/dif.sh@137 -- # nvmfappstart 00:29:22.452 14:12:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:22.452 14:12:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:22.452 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:22.452 14:12:02 -- nvmf/common.sh@470 -- # nvmfpid=92690 00:29:22.452 14:12:02 -- nvmf/common.sh@471 -- # waitforlisten 92690 00:29:22.452 14:12:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:22.452 14:12:02 -- common/autotest_common.sh@817 -- # '[' -z 92690 ']' 00:29:22.452 14:12:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.452 14:12:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:22.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.452 14:12:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:22.452 14:12:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:22.452 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:22.452 [2024-04-26 14:12:02.111977] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:29:22.452 [2024-04-26 14:12:02.112094] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.711 [2024-04-26 14:12:02.282219] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.970 [2024-04-26 14:12:02.506532] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.970 [2024-04-26 14:12:02.506586] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.970 [2024-04-26 14:12:02.506602] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.970 [2024-04-26 14:12:02.506624] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.970 [2024-04-26 14:12:02.506638] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:22.970 [2024-04-26 14:12:02.506676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.543 14:12:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:23.543 14:12:02 -- common/autotest_common.sh@850 -- # return 0 00:29:23.543 14:12:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:23.543 14:12:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:23.543 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:23.543 14:12:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.543 14:12:02 -- target/dif.sh@139 -- # create_transport 00:29:23.543 14:12:02 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:23.543 14:12:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.544 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:23.544 [2024-04-26 14:12:02.985906] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.544 14:12:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.544 14:12:02 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:23.544 14:12:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:23.544 14:12:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:23.544 14:12:02 -- common/autotest_common.sh@10 -- # set +x 00:29:23.544 ************************************ 00:29:23.544 START TEST fio_dif_1_default 00:29:23.544 ************************************ 00:29:23.544 14:12:03 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:29:23.544 14:12:03 -- target/dif.sh@86 -- # create_subsystems 0 00:29:23.544 14:12:03 -- target/dif.sh@28 -- # local sub 00:29:23.544 14:12:03 -- target/dif.sh@30 -- # for sub in "$@" 00:29:23.544 14:12:03 -- target/dif.sh@31 -- # create_subsystem 0 00:29:23.544 14:12:03 -- target/dif.sh@18 -- # local sub_id=0 00:29:23.544 14:12:03 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:23.544 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.544 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:29:23.544 bdev_null0 00:29:23.544 14:12:03 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.544 14:12:03 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:23.544 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.544 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:29:23.544 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.544 14:12:03 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:23.544 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.544 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:29:23.544 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.544 14:12:03 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:23.544 14:12:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.544 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:29:23.544 [2024-04-26 14:12:03.130071] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:23.544 14:12:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.544 14:12:03 -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:23.544 14:12:03 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:23.544 14:12:03 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:23.544 14:12:03 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.544 14:12:03 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.544 14:12:03 -- target/dif.sh@82 -- # gen_fio_conf 00:29:23.544 14:12:03 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:23.544 14:12:03 -- target/dif.sh@54 -- # local file 00:29:23.544 14:12:03 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:23.544 14:12:03 -- target/dif.sh@56 -- # cat 00:29:23.544 14:12:03 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:23.544 14:12:03 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.544 14:12:03 -- common/autotest_common.sh@1327 -- # shift 00:29:23.544 14:12:03 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:23.544 14:12:03 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:23.544 14:12:03 -- nvmf/common.sh@521 -- # config=() 00:29:23.544 14:12:03 -- nvmf/common.sh@521 -- # local subsystem config 00:29:23.544 14:12:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:23.544 14:12:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:23.544 { 00:29:23.544 "params": { 00:29:23.544 "name": "Nvme$subsystem", 00:29:23.544 "trtype": "$TEST_TRANSPORT", 00:29:23.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:23.544 "adrfam": "ipv4", 00:29:23.544 "trsvcid": "$NVMF_PORT", 00:29:23.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:23.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:23.544 "hdgst": ${hdgst:-false}, 00:29:23.544 "ddgst": ${ddgst:-false} 00:29:23.544 }, 00:29:23.544 "method": "bdev_nvme_attach_controller" 00:29:23.544 } 00:29:23.544 EOF 00:29:23.544 )") 00:29:23.544 14:12:03 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:23.544 14:12:03 -- common/autotest_common.sh@1331 -- # grep libasan 
00:29:23.544 14:12:03 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:23.544 14:12:03 -- target/dif.sh@72 -- # (( file <= files )) 00:29:23.544 14:12:03 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:23.544 14:12:03 -- nvmf/common.sh@543 -- # cat 00:29:23.544 14:12:03 -- nvmf/common.sh@545 -- # jq . 00:29:23.544 14:12:03 -- nvmf/common.sh@546 -- # IFS=, 00:29:23.544 14:12:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:23.544 "params": { 00:29:23.544 "name": "Nvme0", 00:29:23.544 "trtype": "tcp", 00:29:23.544 "traddr": "10.0.0.2", 00:29:23.544 "adrfam": "ipv4", 00:29:23.544 "trsvcid": "4420", 00:29:23.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:23.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:23.544 "hdgst": false, 00:29:23.544 "ddgst": false 00:29:23.544 }, 00:29:23.544 "method": "bdev_nvme_attach_controller" 00:29:23.544 }' 00:29:23.544 14:12:03 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:23.544 14:12:03 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:23.544 14:12:03 -- common/autotest_common.sh@1333 -- # break 00:29:23.544 14:12:03 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:23.544 14:12:03 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:23.802 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:23.802 fio-3.35 00:29:23.802 Starting 1 thread 00:29:36.010 00:29:36.010 filename0: (groupid=0, jobs=1): err= 0: pid=92776: Fri Apr 26 14:12:14 2024 00:29:36.010 read: IOPS=283, BW=1135KiB/s (1162kB/s)(11.1MiB/10009msec) 00:29:36.010 slat (nsec): min=6555, max=67956, avg=8738.98, stdev=4539.38 00:29:36.010 clat (usec): min=402, max=42486, avg=14069.85, stdev=19117.50 00:29:36.010 lat (usec): min=408, max=42495, avg=14078.59, stdev=19117.36 00:29:36.010 clat percentiles (usec): 00:29:36.010 | 1.00th=[ 408], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 433], 00:29:36.010 | 30.00th=[ 437], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 474], 00:29:36.010 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:36.010 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:29:36.010 | 99.99th=[42730] 00:29:36.010 bw ( KiB/s): min= 704, max= 1472, per=100.00%, avg=1144.95, stdev=236.02, samples=19 00:29:36.010 iops : min= 176, max= 368, avg=286.21, stdev=58.99, samples=19 00:29:36.010 lat (usec) : 500=62.99%, 750=3.20% 00:29:36.010 lat (msec) : 10=0.14%, 50=33.66% 00:29:36.010 cpu : usr=87.65%, sys=11.82%, ctx=32, majf=0, minf=1638 00:29:36.010 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:36.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.010 issued rwts: total=2840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.010 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:36.010 00:29:36.010 Run status group 0 (all jobs): 00:29:36.010 READ: bw=1135KiB/s (1162kB/s), 1135KiB/s-1135KiB/s (1162kB/s-1162kB/s), io=11.1MiB (11.6MB), run=10009-10009msec 00:29:36.010 ----------------------------------------------------- 00:29:36.010 Suppressions used: 00:29:36.010 count bytes template 00:29:36.010 1 8 /usr/src/fio/parse.c 00:29:36.010 1 8 libtcmalloc_minimal.so 00:29:36.010 1 904 libcrypto.so 00:29:36.010 
----------------------------------------------------- 00:29:36.010 00:29:36.010 14:12:15 -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:36.010 14:12:15 -- target/dif.sh@43 -- # local sub 00:29:36.010 14:12:15 -- target/dif.sh@45 -- # for sub in "$@" 00:29:36.010 14:12:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:36.010 14:12:15 -- target/dif.sh@36 -- # local sub_id=0 00:29:36.011 14:12:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:36.011 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.011 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.011 14:12:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:36.011 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.011 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.011 00:29:36.011 real 0m12.419s 00:29:36.011 user 0m10.750s 00:29:36.011 sys 0m1.579s 00:29:36.011 14:12:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:36.011 ************************************ 00:29:36.011 END TEST fio_dif_1_default 00:29:36.011 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 ************************************ 00:29:36.011 14:12:15 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:36.011 14:12:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:36.011 14:12:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:36.011 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 ************************************ 00:29:36.011 START TEST fio_dif_1_multi_subsystems 00:29:36.011 ************************************ 00:29:36.011 14:12:15 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:29:36.011 14:12:15 -- target/dif.sh@92 -- # local files=1 00:29:36.011 14:12:15 -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:36.011 14:12:15 -- target/dif.sh@28 -- # local sub 00:29:36.011 14:12:15 -- target/dif.sh@30 -- # for sub in "$@" 00:29:36.011 14:12:15 -- target/dif.sh@31 -- # create_subsystem 0 00:29:36.011 14:12:15 -- target/dif.sh@18 -- # local sub_id=0 00:29:36.011 14:12:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:36.011 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.011 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.011 bdev_null0 00:29:36.011 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.011 14:12:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:36.011 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.011 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:36.270 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.270 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:36.270 14:12:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.270 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 [2024-04-26 14:12:15.713467] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@30 -- # for sub in "$@" 00:29:36.270 14:12:15 -- target/dif.sh@31 -- # create_subsystem 1 00:29:36.270 14:12:15 -- target/dif.sh@18 -- # local sub_id=1 00:29:36.270 14:12:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:36.270 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.270 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 bdev_null1 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:36.270 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.270 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:36.270 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.270 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.270 14:12:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:36.270 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:29:36.270 14:12:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:36.270 14:12:15 -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:36.270 14:12:15 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:36.270 14:12:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:36.270 14:12:15 -- nvmf/common.sh@521 -- # config=() 00:29:36.270 14:12:15 -- nvmf/common.sh@521 -- # local subsystem config 00:29:36.270 14:12:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.270 14:12:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:36.270 14:12:15 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.270 14:12:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:36.270 { 00:29:36.270 "params": { 00:29:36.270 "name": "Nvme$subsystem", 00:29:36.270 "trtype": "$TEST_TRANSPORT", 00:29:36.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.270 "adrfam": "ipv4", 00:29:36.270 "trsvcid": "$NVMF_PORT", 00:29:36.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.270 "hdgst": ${hdgst:-false}, 00:29:36.270 "ddgst": ${ddgst:-false} 00:29:36.270 }, 00:29:36.270 "method": "bdev_nvme_attach_controller" 00:29:36.270 } 00:29:36.270 EOF 00:29:36.270 )") 00:29:36.270 14:12:15 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:36.270 14:12:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:36.270 14:12:15 -- target/dif.sh@82 -- # gen_fio_conf 00:29:36.270 14:12:15 -- 
common/autotest_common.sh@1325 -- # local sanitizers 00:29:36.270 14:12:15 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:36.270 14:12:15 -- common/autotest_common.sh@1327 -- # shift 00:29:36.270 14:12:15 -- target/dif.sh@54 -- # local file 00:29:36.270 14:12:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:36.270 14:12:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:36.270 14:12:15 -- target/dif.sh@56 -- # cat 00:29:36.270 14:12:15 -- nvmf/common.sh@543 -- # cat 00:29:36.270 14:12:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:36.270 14:12:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:36.270 14:12:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:36.270 14:12:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:36.270 14:12:15 -- target/dif.sh@72 -- # (( file <= files )) 00:29:36.270 14:12:15 -- target/dif.sh@73 -- # cat 00:29:36.270 14:12:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:36.270 14:12:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:36.270 { 00:29:36.270 "params": { 00:29:36.270 "name": "Nvme$subsystem", 00:29:36.270 "trtype": "$TEST_TRANSPORT", 00:29:36.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:36.270 "adrfam": "ipv4", 00:29:36.270 "trsvcid": "$NVMF_PORT", 00:29:36.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:36.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:36.270 "hdgst": ${hdgst:-false}, 00:29:36.270 "ddgst": ${ddgst:-false} 00:29:36.270 }, 00:29:36.270 "method": "bdev_nvme_attach_controller" 00:29:36.270 } 00:29:36.270 EOF 00:29:36.270 )") 00:29:36.270 14:12:15 -- target/dif.sh@72 -- # (( file++ )) 00:29:36.270 14:12:15 -- target/dif.sh@72 -- # (( file <= files )) 00:29:36.270 14:12:15 -- nvmf/common.sh@543 -- # cat 00:29:36.270 14:12:15 -- nvmf/common.sh@545 -- # jq . 
00:29:36.270 14:12:15 -- nvmf/common.sh@546 -- # IFS=, 00:29:36.270 14:12:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:36.270 "params": { 00:29:36.270 "name": "Nvme0", 00:29:36.270 "trtype": "tcp", 00:29:36.270 "traddr": "10.0.0.2", 00:29:36.270 "adrfam": "ipv4", 00:29:36.270 "trsvcid": "4420", 00:29:36.270 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:36.270 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:36.270 "hdgst": false, 00:29:36.270 "ddgst": false 00:29:36.270 }, 00:29:36.270 "method": "bdev_nvme_attach_controller" 00:29:36.270 },{ 00:29:36.270 "params": { 00:29:36.270 "name": "Nvme1", 00:29:36.270 "trtype": "tcp", 00:29:36.270 "traddr": "10.0.0.2", 00:29:36.270 "adrfam": "ipv4", 00:29:36.270 "trsvcid": "4420", 00:29:36.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:36.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:36.270 "hdgst": false, 00:29:36.270 "ddgst": false 00:29:36.270 }, 00:29:36.270 "method": "bdev_nvme_attach_controller" 00:29:36.270 }' 00:29:36.270 14:12:15 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:36.270 14:12:15 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:36.270 14:12:15 -- common/autotest_common.sh@1333 -- # break 00:29:36.270 14:12:15 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:36.270 14:12:15 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:36.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:36.530 fio-3.35 00:29:36.530 Starting 2 threads 00:29:48.740 00:29:48.740 filename0: (groupid=0, jobs=1): err= 0: pid=92947: Fri Apr 26 14:12:27 2024 00:29:48.740 read: IOPS=153, BW=614KiB/s (629kB/s)(6144KiB/10002msec) 00:29:48.740 slat (nsec): min=6443, max=77602, avg=12694.42, stdev=10003.08 00:29:48.740 clat (usec): min=418, max=42077, avg=26003.43, stdev=19596.28 00:29:48.740 lat (usec): min=426, max=42118, avg=26016.13, stdev=19595.94 00:29:48.740 clat percentiles (usec): 00:29:48.740 | 1.00th=[ 437], 5.00th=[ 449], 10.00th=[ 461], 20.00th=[ 502], 00:29:48.740 | 30.00th=[ 775], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:29:48.740 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:29:48.740 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:48.740 | 99.99th=[42206] 00:29:48.740 bw ( KiB/s): min= 480, max= 864, per=51.32%, avg=619.79, stdev=95.53, samples=19 00:29:48.740 iops : min= 120, max= 216, avg=154.95, stdev=23.88, samples=19 00:29:48.740 lat (usec) : 500=19.60%, 750=10.09%, 1000=7.03% 00:29:48.740 lat (msec) : 2=0.26%, 4=0.26%, 50=62.76% 00:29:48.740 cpu : usr=92.91%, sys=6.60%, ctx=14, majf=0, minf=1638 00:29:48.740 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:48.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.740 issued rwts: total=1536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.740 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:48.740 filename1: (groupid=0, jobs=1): err= 0: pid=92948: Fri Apr 26 14:12:27 2024 00:29:48.740 read: IOPS=147, BW=592KiB/s (606kB/s)(5920KiB/10002msec) 00:29:48.740 
slat (nsec): min=6762, max=59794, avg=12743.25, stdev=9282.18 00:29:48.740 clat (usec): min=429, max=43016, avg=26989.26, stdev=19349.58 00:29:48.740 lat (usec): min=436, max=43044, avg=27002.00, stdev=19349.89 00:29:48.740 clat percentiles (usec): 00:29:48.740 | 1.00th=[ 441], 5.00th=[ 453], 10.00th=[ 465], 20.00th=[ 498], 00:29:48.740 | 30.00th=[ 685], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:29:48.740 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:29:48.740 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:29:48.740 | 99.99th=[43254] 00:29:48.740 bw ( KiB/s): min= 448, max= 864, per=49.41%, avg=596.21, stdev=113.99, samples=19 00:29:48.740 iops : min= 112, max= 216, avg=149.05, stdev=28.50, samples=19 00:29:48.740 lat (usec) : 500=20.14%, 750=10.20%, 1000=3.99% 00:29:48.740 lat (msec) : 2=0.27%, 4=0.27%, 50=65.14% 00:29:48.740 cpu : usr=93.18%, sys=6.29%, ctx=37, majf=0, minf=1635 00:29:48.740 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:48.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.740 issued rwts: total=1480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.740 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:48.740 00:29:48.740 Run status group 0 (all jobs): 00:29:48.740 READ: bw=1206KiB/s (1235kB/s), 592KiB/s-614KiB/s (606kB/s-629kB/s), io=11.8MiB (12.4MB), run=10002-10002msec 00:29:48.740 ----------------------------------------------------- 00:29:48.740 Suppressions used: 00:29:48.740 count bytes template 00:29:48.740 2 16 /usr/src/fio/parse.c 00:29:48.740 1 8 libtcmalloc_minimal.so 00:29:48.740 1 904 libcrypto.so 00:29:48.740 ----------------------------------------------------- 00:29:48.740 00:29:48.740 14:12:28 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:48.740 14:12:28 -- target/dif.sh@43 -- # local sub 00:29:48.740 14:12:28 -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.740 14:12:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:48.740 14:12:28 -- target/dif.sh@36 -- # local sub_id=0 00:29:48.740 14:12:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.740 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.740 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.740 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.740 14:12:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:48.740 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.740 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 14:12:28 -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.999 14:12:28 -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:48.999 14:12:28 -- target/dif.sh@36 -- # local sub_id=1 00:29:48.999 14:12:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:48.999 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 14:12:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:48.999 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 
14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 00:29:48.999 real 0m12.771s 00:29:48.999 user 0m20.877s 00:29:48.999 sys 0m1.741s 00:29:48.999 14:12:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 ************************************ 00:29:48.999 END TEST fio_dif_1_multi_subsystems 00:29:48.999 ************************************ 00:29:48.999 14:12:28 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:48.999 14:12:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:48.999 14:12:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 ************************************ 00:29:48.999 START TEST fio_dif_rand_params 00:29:48.999 ************************************ 00:29:48.999 14:12:28 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:29:48.999 14:12:28 -- target/dif.sh@100 -- # local NULL_DIF 00:29:48.999 14:12:28 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:48.999 14:12:28 -- target/dif.sh@103 -- # NULL_DIF=3 00:29:48.999 14:12:28 -- target/dif.sh@103 -- # bs=128k 00:29:48.999 14:12:28 -- target/dif.sh@103 -- # numjobs=3 00:29:48.999 14:12:28 -- target/dif.sh@103 -- # iodepth=3 00:29:48.999 14:12:28 -- target/dif.sh@103 -- # runtime=5 00:29:48.999 14:12:28 -- target/dif.sh@105 -- # create_subsystems 0 00:29:48.999 14:12:28 -- target/dif.sh@28 -- # local sub 00:29:48.999 14:12:28 -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.999 14:12:28 -- target/dif.sh@31 -- # create_subsystem 0 00:29:48.999 14:12:28 -- target/dif.sh@18 -- # local sub_id=0 00:29:48.999 14:12:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:48.999 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 bdev_null0 00:29:48.999 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 14:12:28 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:48.999 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 14:12:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:48.999 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 14:12:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.999 14:12:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:48.999 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:29:48.999 [2024-04-26 14:12:28.650504] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.999 14:12:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:48.999 14:12:28 -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:48.999 14:12:28 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:48.999 14:12:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:48.999 14:12:28 -- nvmf/common.sh@521 -- # config=() 00:29:48.999 
14:12:28 -- nvmf/common.sh@521 -- # local subsystem config 00:29:48.999 14:12:28 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:48.999 14:12:28 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:48.999 { 00:29:48.999 "params": { 00:29:48.999 "name": "Nvme$subsystem", 00:29:48.999 "trtype": "$TEST_TRANSPORT", 00:29:48.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.999 "adrfam": "ipv4", 00:29:48.999 "trsvcid": "$NVMF_PORT", 00:29:48.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.999 "hdgst": ${hdgst:-false}, 00:29:48.999 "ddgst": ${ddgst:-false} 00:29:48.999 }, 00:29:48.999 "method": "bdev_nvme_attach_controller" 00:29:48.999 } 00:29:48.999 EOF 00:29:48.999 )") 00:29:48.999 14:12:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.999 14:12:28 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.999 14:12:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:48.999 14:12:28 -- nvmf/common.sh@543 -- # cat 00:29:48.999 14:12:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:48.999 14:12:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:48.999 14:12:28 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:48.999 14:12:28 -- common/autotest_common.sh@1327 -- # shift 00:29:48.999 14:12:28 -- target/dif.sh@82 -- # gen_fio_conf 00:29:48.999 14:12:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:48.999 14:12:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.999 14:12:28 -- target/dif.sh@54 -- # local file 00:29:48.999 14:12:28 -- target/dif.sh@56 -- # cat 00:29:48.999 14:12:28 -- nvmf/common.sh@545 -- # jq . 
00:29:48.999 14:12:28 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:48.999 14:12:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:48.999 14:12:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:48.999 14:12:28 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:48.999 14:12:28 -- nvmf/common.sh@546 -- # IFS=, 00:29:48.999 14:12:28 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:48.999 "params": { 00:29:48.999 "name": "Nvme0", 00:29:48.999 "trtype": "tcp", 00:29:48.999 "traddr": "10.0.0.2", 00:29:49.000 "adrfam": "ipv4", 00:29:49.000 "trsvcid": "4420", 00:29:49.000 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.000 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:49.000 "hdgst": false, 00:29:49.000 "ddgst": false 00:29:49.000 }, 00:29:49.000 "method": "bdev_nvme_attach_controller" 00:29:49.000 }' 00:29:49.000 14:12:28 -- target/dif.sh@72 -- # (( file <= files )) 00:29:49.258 14:12:28 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:49.258 14:12:28 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:49.258 14:12:28 -- common/autotest_common.sh@1333 -- # break 00:29:49.258 14:12:28 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:49.258 14:12:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:49.258 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:49.258 ... 00:29:49.258 fio-3.35 00:29:49.258 Starting 3 threads 00:29:55.890 00:29:55.890 filename0: (groupid=0, jobs=1): err= 0: pid=93119: Fri Apr 26 14:12:34 2024 00:29:55.890 read: IOPS=279, BW=35.0MiB/s (36.7MB/s)(175MiB/5004msec) 00:29:55.890 slat (nsec): min=7029, max=57911, avg=14261.51, stdev=3710.65 00:29:55.890 clat (usec): min=4903, max=52366, avg=10702.55, stdev=6581.65 00:29:55.890 lat (usec): min=4914, max=52374, avg=10716.81, stdev=6581.39 00:29:55.890 clat percentiles (usec): 00:29:55.890 | 1.00th=[ 5997], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[ 9110], 00:29:55.890 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:29:55.890 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[11207], 00:29:55.890 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52167], 99.95th=[52167], 00:29:55.890 | 99.99th=[52167] 00:29:55.890 bw ( KiB/s): min=26880, max=40704, per=34.64%, avg=35236.33, stdev=5095.88, samples=9 00:29:55.890 iops : min= 210, max= 318, avg=275.22, stdev=39.91, samples=9 00:29:55.890 lat (msec) : 10=58.86%, 20=38.57%, 50=0.50%, 100=2.07% 00:29:55.890 cpu : usr=91.17%, sys=7.50%, ctx=11, majf=0, minf=1638 00:29:55.890 IO depths : 1=1.3%, 2=98.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.890 issued rwts: total=1400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.890 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:55.890 filename0: (groupid=0, jobs=1): err= 0: pid=93120: Fri Apr 26 14:12:34 2024 00:29:55.890 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5006msec) 00:29:55.890 slat (nsec): min=6788, max=33146, avg=11054.36, stdev=5003.25 00:29:55.890 clat (usec): min=3851, max=15468, avg=12331.33, stdev=2279.94 00:29:55.891 lat (usec): min=3858, max=15475, 
avg=12342.38, stdev=2280.28 00:29:55.891 clat percentiles (usec): 00:29:55.891 | 1.00th=[ 3884], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[10814], 00:29:55.891 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:29:55.891 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14091], 95.00th=[14353], 00:29:55.891 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15139], 99.95th=[15533], 00:29:55.891 | 99.99th=[15533] 00:29:55.891 bw ( KiB/s): min=27648, max=36096, per=30.62%, avg=31146.67, stdev=2876.44, samples=9 00:29:55.891 iops : min= 216, max= 282, avg=243.33, stdev=22.47, samples=9 00:29:55.891 lat (msec) : 4=1.98%, 10=16.71%, 20=81.32% 00:29:55.891 cpu : usr=91.49%, sys=7.25%, ctx=13, majf=0, minf=1637 00:29:55.891 IO depths : 1=33.1%, 2=66.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.891 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.891 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:55.891 filename0: (groupid=0, jobs=1): err= 0: pid=93121: Fri Apr 26 14:12:34 2024 00:29:55.891 read: IOPS=272, BW=34.0MiB/s (35.7MB/s)(170MiB/5006msec) 00:29:55.891 slat (nsec): min=6949, max=37747, avg=13494.26, stdev=4718.29 00:29:55.891 clat (usec): min=5409, max=53273, avg=10996.12, stdev=4598.09 00:29:55.891 lat (usec): min=5422, max=53289, avg=11009.61, stdev=4598.09 00:29:55.891 clat percentiles (usec): 00:29:55.891 | 1.00th=[ 5932], 5.00th=[ 6783], 10.00th=[ 7111], 20.00th=[ 9896], 00:29:55.891 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:29:55.891 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12256], 95.00th=[12649], 00:29:55.891 | 99.00th=[49021], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:29:55.891 | 99.99th=[53216] 00:29:55.891 bw ( KiB/s): min=29952, max=41472, per=34.70%, avg=35291.00, stdev=3785.95, samples=9 00:29:55.891 iops : min= 234, max= 324, avg=275.67, stdev=29.54, samples=9 00:29:55.891 lat (msec) : 10=22.82%, 20=76.08%, 50=0.22%, 100=0.88% 00:29:55.891 cpu : usr=90.85%, sys=7.81%, ctx=18, majf=0, minf=1635 00:29:55.891 IO depths : 1=6.4%, 2=93.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:55.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.891 issued rwts: total=1363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.891 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:55.891 00:29:55.891 Run status group 0 (all jobs): 00:29:55.891 READ: bw=99.3MiB/s (104MB/s), 30.3MiB/s-35.0MiB/s (31.8MB/s-36.7MB/s), io=497MiB (521MB), run=5004-5006msec 00:29:56.458 ----------------------------------------------------- 00:29:56.458 Suppressions used: 00:29:56.458 count bytes template 00:29:56.458 5 44 /usr/src/fio/parse.c 00:29:56.458 1 8 libtcmalloc_minimal.so 00:29:56.458 1 904 libcrypto.so 00:29:56.458 ----------------------------------------------------- 00:29:56.458 00:29:56.458 14:12:36 -- target/dif.sh@107 -- # destroy_subsystems 0 00:29:56.458 14:12:36 -- target/dif.sh@43 -- # local sub 00:29:56.458 14:12:36 -- target/dif.sh@45 -- # for sub in "$@" 00:29:56.458 14:12:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:56.458 14:12:36 -- target/dif.sh@36 -- # local sub_id=0 00:29:56.458 14:12:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 
00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@109 -- # NULL_DIF=2 00:29:56.458 14:12:36 -- target/dif.sh@109 -- # bs=4k 00:29:56.458 14:12:36 -- target/dif.sh@109 -- # numjobs=8 00:29:56.458 14:12:36 -- target/dif.sh@109 -- # iodepth=16 00:29:56.458 14:12:36 -- target/dif.sh@109 -- # runtime= 00:29:56.458 14:12:36 -- target/dif.sh@109 -- # files=2 00:29:56.458 14:12:36 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:29:56.458 14:12:36 -- target/dif.sh@28 -- # local sub 00:29:56.458 14:12:36 -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.458 14:12:36 -- target/dif.sh@31 -- # create_subsystem 0 00:29:56.458 14:12:36 -- target/dif.sh@18 -- # local sub_id=0 00:29:56.458 14:12:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 bdev_null0 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 [2024-04-26 14:12:36.088106] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.458 14:12:36 -- target/dif.sh@31 -- # create_subsystem 1 00:29:56.458 14:12:36 -- target/dif.sh@18 -- # local sub_id=1 00:29:56.458 14:12:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 bdev_null1 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 
14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.458 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.458 14:12:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.458 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.458 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.717 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.717 14:12:36 -- target/dif.sh@30 -- # for sub in "$@" 00:29:56.717 14:12:36 -- target/dif.sh@31 -- # create_subsystem 2 00:29:56.717 14:12:36 -- target/dif.sh@18 -- # local sub_id=2 00:29:56.717 14:12:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:29:56.717 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.717 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.717 bdev_null2 00:29:56.717 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.717 14:12:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:29:56.717 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.717 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.717 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.717 14:12:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:29:56.717 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.717 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.717 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.717 14:12:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:56.717 14:12:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:56.717 14:12:36 -- common/autotest_common.sh@10 -- # set +x 00:29:56.717 14:12:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:56.717 14:12:36 -- target/dif.sh@112 -- # fio /dev/fd/62 00:29:56.717 14:12:36 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:29:56.717 14:12:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:29:56.717 14:12:36 -- nvmf/common.sh@521 -- # config=() 00:29:56.717 14:12:36 -- nvmf/common.sh@521 -- # local subsystem config 00:29:56.717 14:12:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:56.717 14:12:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:56.717 { 00:29:56.717 "params": { 00:29:56.717 "name": "Nvme$subsystem", 00:29:56.717 "trtype": "$TEST_TRANSPORT", 00:29:56.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.717 "adrfam": "ipv4", 00:29:56.717 "trsvcid": "$NVMF_PORT", 00:29:56.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.717 "hdgst": ${hdgst:-false}, 00:29:56.717 "ddgst": ${ddgst:-false} 00:29:56.717 }, 00:29:56.717 "method": "bdev_nvme_attach_controller" 00:29:56.717 } 00:29:56.717 EOF 00:29:56.717 )") 00:29:56.717 14:12:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.717 14:12:36 -- target/dif.sh@82 -- # gen_fio_conf 
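The xtrace above shows each null bdev being created with a 64MB size, 512-byte blocks, 16 bytes of metadata and DIF type 2, then exported over NVMe/TCP on 10.0.0.2:4420. rpc_cmd effectively forwards its arguments to SPDK's scripts/rpc.py, so outside of dif.sh the same sequence can be sketched against an already-running nvmf target like this (the rpc.py path and listener address are the ones used in this job; adjust for your environment):

    # assumes an SPDK nvmf target is running and a tcp transport has already been created
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in 0 1 2; do
        # 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information (DIF) type 2
        $RPC bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 2
        $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done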
00:29:56.717 14:12:36 -- target/dif.sh@54 -- # local file 00:29:56.717 14:12:36 -- target/dif.sh@56 -- # cat 00:29:56.717 14:12:36 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.717 14:12:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:29:56.717 14:12:36 -- nvmf/common.sh@543 -- # cat 00:29:56.717 14:12:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:56.717 14:12:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:29:56.717 14:12:36 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:56.717 14:12:36 -- common/autotest_common.sh@1327 -- # shift 00:29:56.717 14:12:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:29:56.717 14:12:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:29:56.717 14:12:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:29:56.717 14:12:36 -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.717 14:12:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:56.717 14:12:36 -- target/dif.sh@73 -- # cat 00:29:56.717 14:12:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:56.717 { 00:29:56.717 "params": { 00:29:56.717 "name": "Nvme$subsystem", 00:29:56.717 "trtype": "$TEST_TRANSPORT", 00:29:56.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.717 "adrfam": "ipv4", 00:29:56.717 "trsvcid": "$NVMF_PORT", 00:29:56.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.717 "hdgst": ${hdgst:-false}, 00:29:56.717 "ddgst": ${ddgst:-false} 00:29:56.717 }, 00:29:56.717 "method": "bdev_nvme_attach_controller" 00:29:56.717 } 00:29:56.717 EOF 00:29:56.717 )") 00:29:56.717 14:12:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:29:56.717 14:12:36 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:29:56.718 14:12:36 -- nvmf/common.sh@543 -- # cat 00:29:56.718 14:12:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:29:56.718 14:12:36 -- target/dif.sh@72 -- # (( file++ )) 00:29:56.718 14:12:36 -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.718 14:12:36 -- target/dif.sh@73 -- # cat 00:29:56.718 14:12:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:29:56.718 14:12:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:29:56.718 { 00:29:56.718 "params": { 00:29:56.718 "name": "Nvme$subsystem", 00:29:56.718 "trtype": "$TEST_TRANSPORT", 00:29:56.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:56.718 "adrfam": "ipv4", 00:29:56.718 "trsvcid": "$NVMF_PORT", 00:29:56.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:56.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:56.718 "hdgst": ${hdgst:-false}, 00:29:56.718 "ddgst": ${ddgst:-false} 00:29:56.718 }, 00:29:56.718 "method": "bdev_nvme_attach_controller" 00:29:56.718 } 00:29:56.718 EOF 00:29:56.718 )") 00:29:56.718 14:12:36 -- nvmf/common.sh@543 -- # cat 00:29:56.718 14:12:36 -- target/dif.sh@72 -- # (( file++ )) 00:29:56.718 14:12:36 -- target/dif.sh@72 -- # (( file <= files )) 00:29:56.718 14:12:36 -- nvmf/common.sh@545 -- # jq . 
00:29:56.718 14:12:36 -- nvmf/common.sh@546 -- # IFS=, 00:29:56.718 14:12:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:29:56.718 "params": { 00:29:56.718 "name": "Nvme0", 00:29:56.718 "trtype": "tcp", 00:29:56.718 "traddr": "10.0.0.2", 00:29:56.718 "adrfam": "ipv4", 00:29:56.718 "trsvcid": "4420", 00:29:56.718 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.718 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:56.718 "hdgst": false, 00:29:56.718 "ddgst": false 00:29:56.718 }, 00:29:56.718 "method": "bdev_nvme_attach_controller" 00:29:56.718 },{ 00:29:56.718 "params": { 00:29:56.718 "name": "Nvme1", 00:29:56.718 "trtype": "tcp", 00:29:56.718 "traddr": "10.0.0.2", 00:29:56.718 "adrfam": "ipv4", 00:29:56.718 "trsvcid": "4420", 00:29:56.718 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:56.718 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:56.718 "hdgst": false, 00:29:56.718 "ddgst": false 00:29:56.718 }, 00:29:56.718 "method": "bdev_nvme_attach_controller" 00:29:56.718 },{ 00:29:56.718 "params": { 00:29:56.718 "name": "Nvme2", 00:29:56.718 "trtype": "tcp", 00:29:56.718 "traddr": "10.0.0.2", 00:29:56.718 "adrfam": "ipv4", 00:29:56.718 "trsvcid": "4420", 00:29:56.718 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:56.718 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:56.718 "hdgst": false, 00:29:56.718 "ddgst": false 00:29:56.718 }, 00:29:56.718 "method": "bdev_nvme_attach_controller" 00:29:56.718 }' 00:29:56.718 14:12:36 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:29:56.718 14:12:36 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:29:56.718 14:12:36 -- common/autotest_common.sh@1333 -- # break 00:29:56.718 14:12:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:29:56.718 14:12:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:56.975 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:56.975 ... 00:29:56.975 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:56.975 ... 00:29:56.975 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:29:56.975 ... 
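The JSON printed above (one bdev_nvme_attach_controller params block per subsystem) is what the fio bdev plugin consumes; in the trace it is fed in over /dev/fd/62 while the generated job file arrives over /dev/fd/61, and the plugin itself is pulled in through LD_PRELOAD together with libasan. Run standalone, the same invocation looks roughly like the sketch below; bdev.json and dif.fio are illustrative file names standing in for the two file descriptors:

    # bdev.json: the three bdev_nvme_attach_controller blocks printed above
    # dif.fio:   the job file from gen_fio_conf (randread, bs=4k, iodepth=16, numjobs=8 over filename0-2)
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio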
00:29:56.975 fio-3.35 00:29:56.975 Starting 24 threads 00:30:09.228 00:30:09.228 filename0: (groupid=0, jobs=1): err= 0: pid=93225: Fri Apr 26 14:12:47 2024 00:30:09.228 read: IOPS=261, BW=1048KiB/s (1073kB/s)(10.2MiB/10017msec) 00:30:09.228 slat (usec): min=4, max=8032, avg=13.92, stdev=156.66 00:30:09.228 clat (msec): min=2, max=141, avg=60.95, stdev=21.15 00:30:09.228 lat (msec): min=2, max=141, avg=60.97, stdev=21.15 00:30:09.228 clat percentiles (msec): 00:30:09.228 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 46], 00:30:09.228 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 67], 00:30:09.228 | 70.00th=[ 72], 80.00th=[ 79], 90.00th=[ 87], 95.00th=[ 94], 00:30:09.228 | 99.00th=[ 114], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 142], 00:30:09.228 | 99.99th=[ 142] 00:30:09.228 bw ( KiB/s): min= 768, max= 1840, per=4.63%, avg=1045.60, stdev=248.10, samples=20 00:30:09.228 iops : min= 192, max= 460, avg=261.40, stdev=62.03, samples=20 00:30:09.228 lat (msec) : 4=1.22%, 10=1.83%, 20=0.61%, 50=30.26%, 100=63.22% 00:30:09.228 lat (msec) : 250=2.86% 00:30:09.228 cpu : usr=33.91%, sys=1.62%, ctx=933, majf=0, minf=1635 00:30:09.228 IO depths : 1=0.6%, 2=1.3%, 4=9.1%, 8=76.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:30:09.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.228 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.228 issued rwts: total=2624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.228 filename0: (groupid=0, jobs=1): err= 0: pid=93226: Fri Apr 26 14:12:47 2024 00:30:09.228 read: IOPS=227, BW=911KiB/s (933kB/s)(9156KiB/10051msec) 00:30:09.228 slat (usec): min=3, max=8030, avg=14.86, stdev=167.68 00:30:09.228 clat (msec): min=29, max=141, avg=70.05, stdev=21.41 00:30:09.228 lat (msec): min=29, max=141, avg=70.07, stdev=21.41 00:30:09.228 clat percentiles (msec): 00:30:09.228 | 1.00th=[ 35], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 51], 00:30:09.228 | 30.00th=[ 59], 40.00th=[ 64], 50.00th=[ 70], 60.00th=[ 71], 00:30:09.228 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 116], 00:30:09.228 | 99.00th=[ 133], 99.50th=[ 142], 99.90th=[ 142], 99.95th=[ 142], 00:30:09.228 | 99.99th=[ 142] 00:30:09.228 bw ( KiB/s): min= 640, max= 1168, per=4.03%, avg=910.80, stdev=145.17, samples=20 00:30:09.228 iops : min= 160, max= 292, avg=227.70, stdev=36.29, samples=20 00:30:09.228 lat (msec) : 50=20.14%, 100=69.90%, 250=9.96% 00:30:09.228 cpu : usr=33.30%, sys=1.47%, ctx=952, majf=0, minf=1636 00:30:09.228 IO depths : 1=1.7%, 2=3.6%, 4=11.7%, 8=71.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:09.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.228 complete : 0=0.0%, 4=90.4%, 8=4.9%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.228 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.228 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.229 filename0: (groupid=0, jobs=1): err= 0: pid=93227: Fri Apr 26 14:12:47 2024 00:30:09.229 read: IOPS=221, BW=885KiB/s (906kB/s)(8860KiB/10015msec) 00:30:09.229 slat (usec): min=3, max=8033, avg=20.07, stdev=255.47 00:30:09.229 clat (msec): min=28, max=144, avg=72.17, stdev=18.76 00:30:09.229 lat (msec): min=28, max=144, avg=72.19, stdev=18.76 00:30:09.229 clat percentiles (msec): 00:30:09.229 | 1.00th=[ 37], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 59], 00:30:09.229 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 73], 00:30:09.229 | 70.00th=[ 
81], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 111], 00:30:09.229 | 99.00th=[ 130], 99.50th=[ 133], 99.90th=[ 138], 99.95th=[ 144], 00:30:09.229 | 99.99th=[ 144] 00:30:09.229 bw ( KiB/s): min= 768, max= 1072, per=3.91%, avg=882.05, stdev=100.73, samples=20 00:30:09.229 iops : min= 192, max= 268, avg=220.50, stdev=25.20, samples=20 00:30:09.229 lat (msec) : 50=11.69%, 100=80.27%, 250=8.04% 00:30:09.229 cpu : usr=39.57%, sys=1.43%, ctx=1137, majf=0, minf=1634 00:30:09.229 IO depths : 1=2.3%, 2=5.9%, 4=16.4%, 8=65.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:30:09.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 issued rwts: total=2215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.229 filename0: (groupid=0, jobs=1): err= 0: pid=93228: Fri Apr 26 14:12:47 2024 00:30:09.229 read: IOPS=235, BW=941KiB/s (964kB/s)(9416KiB/10003msec) 00:30:09.229 slat (nsec): min=3577, max=39542, avg=11222.07, stdev=4923.01 00:30:09.229 clat (msec): min=31, max=162, avg=67.92, stdev=20.32 00:30:09.229 lat (msec): min=31, max=162, avg=67.93, stdev=20.32 00:30:09.229 clat percentiles (msec): 00:30:09.229 | 1.00th=[ 34], 5.00th=[ 39], 10.00th=[ 44], 20.00th=[ 48], 00:30:09.229 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 68], 60.00th=[ 71], 00:30:09.229 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 94], 95.00th=[ 102], 00:30:09.229 | 99.00th=[ 124], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:30:09.229 | 99.99th=[ 163] 00:30:09.229 bw ( KiB/s): min= 688, max= 1200, per=4.17%, avg=941.47, stdev=134.53, samples=19 00:30:09.229 iops : min= 172, max= 300, avg=235.37, stdev=33.63, samples=19 00:30:09.229 lat (msec) : 50=23.87%, 100=70.73%, 250=5.40% 00:30:09.229 cpu : usr=38.28%, sys=1.37%, ctx=1108, majf=0, minf=1634 00:30:09.229 IO depths : 1=1.1%, 2=2.4%, 4=9.3%, 8=74.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:09.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 issued rwts: total=2354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.229 filename0: (groupid=0, jobs=1): err= 0: pid=93229: Fri Apr 26 14:12:47 2024 00:30:09.229 read: IOPS=217, BW=870KiB/s (891kB/s)(8728KiB/10029msec) 00:30:09.229 slat (usec): min=3, max=8029, avg=22.67, stdev=297.04 00:30:09.229 clat (msec): min=30, max=152, avg=73.37, stdev=18.00 00:30:09.229 lat (msec): min=30, max=152, avg=73.39, stdev=18.00 00:30:09.229 clat percentiles (msec): 00:30:09.229 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 59], 00:30:09.229 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 71], 60.00th=[ 73], 00:30:09.229 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 99], 95.00th=[ 105], 00:30:09.229 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 153], 99.95th=[ 153], 00:30:09.229 | 99.99th=[ 153] 00:30:09.229 bw ( KiB/s): min= 754, max= 1000, per=3.83%, avg=865.35, stdev=73.56, samples=20 00:30:09.229 iops : min= 188, max= 250, avg=216.30, stdev=18.45, samples=20 00:30:09.229 lat (msec) : 50=10.82%, 100=82.03%, 250=7.15% 00:30:09.229 cpu : usr=32.04%, sys=1.15%, ctx=922, majf=0, minf=1634 00:30:09.229 IO depths : 1=1.7%, 2=4.2%, 4=12.6%, 8=69.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:09.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 complete : 0=0.0%, 4=90.9%, 8=4.4%, 
16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.229 filename0: (groupid=0, jobs=1): err= 0: pid=93230: Fri Apr 26 14:12:47 2024 00:30:09.229 read: IOPS=249, BW=998KiB/s (1022kB/s)(9.79MiB/10046msec) 00:30:09.229 slat (usec): min=4, max=11031, avg=21.79, stdev=315.76 00:30:09.229 clat (msec): min=10, max=132, avg=63.91, stdev=20.61 00:30:09.229 lat (msec): min=10, max=132, avg=63.93, stdev=20.61 00:30:09.229 clat percentiles (msec): 00:30:09.229 | 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 43], 20.00th=[ 48], 00:30:09.229 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 69], 00:30:09.229 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 91], 95.00th=[ 100], 00:30:09.229 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 133], 99.95th=[ 133], 00:30:09.229 | 99.99th=[ 133] 00:30:09.229 bw ( KiB/s): min= 784, max= 1456, per=4.42%, avg=998.45, stdev=174.80, samples=20 00:30:09.229 iops : min= 196, max= 364, avg=249.60, stdev=43.71, samples=20 00:30:09.229 lat (msec) : 20=2.55%, 50=24.53%, 100=68.77%, 250=4.15% 00:30:09.229 cpu : usr=32.20%, sys=1.11%, ctx=893, majf=0, minf=1634 00:30:09.229 IO depths : 1=0.7%, 2=1.7%, 4=8.1%, 8=76.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:09.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 issued rwts: total=2507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.229 filename0: (groupid=0, jobs=1): err= 0: pid=93231: Fri Apr 26 14:12:47 2024 00:30:09.229 read: IOPS=242, BW=971KiB/s (995kB/s)(9732KiB/10019msec) 00:30:09.229 slat (usec): min=4, max=8009, avg=28.08, stdev=310.60 00:30:09.229 clat (msec): min=29, max=136, avg=65.65, stdev=16.99 00:30:09.229 lat (msec): min=29, max=136, avg=65.68, stdev=16.99 00:30:09.229 clat percentiles (msec): 00:30:09.229 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 50], 00:30:09.229 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 69], 00:30:09.229 | 70.00th=[ 71], 80.00th=[ 81], 90.00th=[ 91], 95.00th=[ 97], 00:30:09.229 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 136], 99.95th=[ 136], 00:30:09.229 | 99.99th=[ 136] 00:30:09.229 bw ( KiB/s): min= 768, max= 1200, per=4.31%, avg=972.21, stdev=106.14, samples=19 00:30:09.229 iops : min= 192, max= 300, avg=243.05, stdev=26.54, samples=19 00:30:09.229 lat (msec) : 50=21.74%, 100=74.89%, 250=3.37% 00:30:09.229 cpu : usr=34.49%, sys=1.08%, ctx=1079, majf=0, minf=1636 00:30:09.229 IO depths : 1=1.1%, 2=2.6%, 4=9.5%, 8=74.4%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:09.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.229 issued rwts: total=2433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.229 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.229 filename0: (groupid=0, jobs=1): err= 0: pid=93232: Fri Apr 26 14:12:47 2024 00:30:09.229 read: IOPS=213, BW=854KiB/s (874kB/s)(8548KiB/10013msec) 00:30:09.229 slat (usec): min=3, max=8035, avg=22.77, stdev=274.37 00:30:09.229 clat (msec): min=30, max=142, avg=74.73, stdev=17.88 00:30:09.229 lat (msec): min=30, max=142, avg=74.75, stdev=17.88 00:30:09.229 clat percentiles (msec): 00:30:09.229 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 61], 00:30:09.229 | 30.00th=[ 65], 
40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:30:09.229 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 97], 95.00th=[ 108], 00:30:09.229 | 99.00th=[ 124], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 144], 00:30:09.229 | 99.99th=[ 144] 00:30:09.229 bw ( KiB/s): min= 640, max= 1016, per=3.74%, avg=845.89, stdev=94.35, samples=19 00:30:09.229 iops : min= 160, max= 254, avg=211.47, stdev=23.59, samples=19 00:30:09.229 lat (msec) : 50=7.63%, 100=84.93%, 250=7.44% 00:30:09.229 cpu : usr=34.16%, sys=1.69%, ctx=975, majf=0, minf=1636 00:30:09.230 IO depths : 1=2.0%, 2=4.4%, 4=14.1%, 8=68.5%, 16=10.9%, 32=0.0%, >=64=0.0% 00:30:09.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 complete : 0=0.0%, 4=90.8%, 8=3.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 issued rwts: total=2137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.230 filename1: (groupid=0, jobs=1): err= 0: pid=93233: Fri Apr 26 14:12:47 2024 00:30:09.230 read: IOPS=243, BW=975KiB/s (998kB/s)(9772KiB/10026msec) 00:30:09.230 slat (usec): min=3, max=8018, avg=18.64, stdev=224.71 00:30:09.230 clat (msec): min=14, max=144, avg=65.51, stdev=20.30 00:30:09.230 lat (msec): min=14, max=144, avg=65.53, stdev=20.30 00:30:09.230 clat percentiles (msec): 00:30:09.230 | 1.00th=[ 21], 5.00th=[ 39], 10.00th=[ 43], 20.00th=[ 48], 00:30:09.230 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 64], 60.00th=[ 70], 00:30:09.230 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 104], 00:30:09.230 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:30:09.230 | 99.99th=[ 146] 00:30:09.230 bw ( KiB/s): min= 768, max= 1378, per=4.30%, avg=970.50, stdev=142.56, samples=20 00:30:09.230 iops : min= 192, max= 344, avg=242.60, stdev=35.57, samples=20 00:30:09.230 lat (msec) : 20=0.65%, 50=26.61%, 100=66.43%, 250=6.30% 00:30:09.230 cpu : usr=36.10%, sys=1.40%, ctx=1051, majf=0, minf=1637 00:30:09.230 IO depths : 1=1.3%, 2=2.9%, 4=9.7%, 8=73.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:09.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 issued rwts: total=2443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.230 filename1: (groupid=0, jobs=1): err= 0: pid=93234: Fri Apr 26 14:12:47 2024 00:30:09.230 read: IOPS=264, BW=1060KiB/s (1085kB/s)(10.4MiB/10032msec) 00:30:09.230 slat (usec): min=3, max=8029, avg=21.50, stdev=258.07 00:30:09.230 clat (msec): min=17, max=138, avg=60.24, stdev=17.44 00:30:09.230 lat (msec): min=17, max=138, avg=60.26, stdev=17.45 00:30:09.230 clat percentiles (msec): 00:30:09.230 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 41], 20.00th=[ 46], 00:30:09.230 | 30.00th=[ 48], 40.00th=[ 54], 50.00th=[ 58], 60.00th=[ 63], 00:30:09.230 | 70.00th=[ 68], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 94], 00:30:09.230 | 99.00th=[ 111], 99.50th=[ 120], 99.90th=[ 131], 99.95th=[ 131], 00:30:09.230 | 99.99th=[ 138] 00:30:09.230 bw ( KiB/s): min= 848, max= 1280, per=4.68%, avg=1056.65, stdev=134.23, samples=20 00:30:09.230 iops : min= 212, max= 320, avg=264.15, stdev=33.54, samples=20 00:30:09.230 lat (msec) : 20=0.60%, 50=34.69%, 100=62.60%, 250=2.11% 00:30:09.230 cpu : usr=41.80%, sys=1.58%, ctx=1186, majf=0, minf=1637 00:30:09.230 IO depths : 1=1.1%, 2=2.3%, 4=10.2%, 8=74.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:09.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 issued rwts: total=2658,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.230 filename1: (groupid=0, jobs=1): err= 0: pid=93235: Fri Apr 26 14:12:47 2024 00:30:09.230 read: IOPS=292, BW=1172KiB/s (1200kB/s)(11.5MiB/10023msec) 00:30:09.230 slat (usec): min=3, max=7021, avg=18.95, stdev=203.65 00:30:09.230 clat (usec): min=1450, max=111899, avg=54451.41, stdev=19548.06 00:30:09.230 lat (usec): min=1460, max=111906, avg=54470.36, stdev=19549.50 00:30:09.230 clat percentiles (usec): 00:30:09.230 | 1.00th=[ 1516], 5.00th=[ 13960], 10.00th=[ 38011], 20.00th=[ 42730], 00:30:09.230 | 30.00th=[ 46400], 40.00th=[ 49021], 50.00th=[ 54264], 60.00th=[ 58983], 00:30:09.230 | 70.00th=[ 63177], 80.00th=[ 68682], 90.00th=[ 79168], 95.00th=[ 87557], 00:30:09.230 | 99.00th=[102237], 99.50th=[108528], 99.90th=[111674], 99.95th=[111674], 00:30:09.230 | 99.99th=[111674] 00:30:09.230 bw ( KiB/s): min= 896, max= 2523, per=5.18%, avg=1169.10, stdev=334.89, samples=20 00:30:09.230 iops : min= 224, max= 630, avg=292.20, stdev=83.57, samples=20 00:30:09.230 lat (msec) : 2=2.18%, 4=1.67%, 10=0.58%, 20=2.04%, 50=36.04% 00:30:09.230 lat (msec) : 100=56.16%, 250=1.33% 00:30:09.230 cpu : usr=43.15%, sys=1.83%, ctx=1329, majf=0, minf=1637 00:30:09.230 IO depths : 1=1.3%, 2=3.4%, 4=11.8%, 8=71.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:09.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 complete : 0=0.0%, 4=90.5%, 8=4.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 issued rwts: total=2936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.230 filename1: (groupid=0, jobs=1): err= 0: pid=93236: Fri Apr 26 14:12:47 2024 00:30:09.230 read: IOPS=254, BW=1019KiB/s (1043kB/s)(9.98MiB/10030msec) 00:30:09.230 slat (usec): min=3, max=4029, avg=20.45, stdev=194.63 00:30:09.230 clat (msec): min=23, max=122, avg=62.65, stdev=18.14 00:30:09.230 lat (msec): min=23, max=122, avg=62.67, stdev=18.14 00:30:09.230 clat percentiles (msec): 00:30:09.230 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 41], 20.00th=[ 47], 00:30:09.230 | 30.00th=[ 52], 40.00th=[ 59], 50.00th=[ 62], 60.00th=[ 65], 00:30:09.230 | 70.00th=[ 70], 80.00th=[ 79], 90.00th=[ 90], 95.00th=[ 95], 00:30:09.230 | 99.00th=[ 110], 99.50th=[ 117], 99.90th=[ 124], 99.95th=[ 124], 00:30:09.230 | 99.99th=[ 124] 00:30:09.230 bw ( KiB/s): min= 768, max= 1376, per=4.50%, avg=1015.60, stdev=154.55, samples=20 00:30:09.230 iops : min= 192, max= 344, avg=253.90, stdev=38.64, samples=20 00:30:09.230 lat (msec) : 50=28.73%, 100=67.87%, 250=3.41% 00:30:09.230 cpu : usr=42.43%, sys=1.79%, ctx=1311, majf=0, minf=1634 00:30:09.230 IO depths : 1=1.8%, 2=3.8%, 4=11.2%, 8=71.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:09.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 issued rwts: total=2555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.230 filename1: (groupid=0, jobs=1): err= 0: pid=93237: Fri Apr 26 14:12:47 2024 00:30:09.230 read: IOPS=229, BW=918KiB/s (940kB/s)(9188KiB/10012msec) 00:30:09.230 slat (usec): min=3, max=3556, avg=13.01, stdev=74.31 00:30:09.230 clat (msec): min=31, max=182, avg=69.63, 
stdev=20.01 00:30:09.230 lat (msec): min=31, max=182, avg=69.65, stdev=20.01 00:30:09.230 clat percentiles (msec): 00:30:09.230 | 1.00th=[ 33], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 55], 00:30:09.230 | 30.00th=[ 62], 40.00th=[ 66], 50.00th=[ 68], 60.00th=[ 71], 00:30:09.230 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 106], 00:30:09.230 | 99.00th=[ 134], 99.50th=[ 161], 99.90th=[ 182], 99.95th=[ 182], 00:30:09.230 | 99.99th=[ 182] 00:30:09.230 bw ( KiB/s): min= 764, max= 1152, per=3.99%, avg=901.26, stdev=105.65, samples=19 00:30:09.230 iops : min= 191, max= 288, avg=225.32, stdev=26.41, samples=19 00:30:09.230 lat (msec) : 50=16.54%, 100=76.32%, 250=7.14% 00:30:09.230 cpu : usr=39.78%, sys=1.59%, ctx=1385, majf=0, minf=1636 00:30:09.230 IO depths : 1=2.3%, 2=5.5%, 4=15.5%, 8=66.2%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:09.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 complete : 0=0.0%, 4=91.5%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.230 issued rwts: total=2297,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.230 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.230 filename1: (groupid=0, jobs=1): err= 0: pid=93238: Fri Apr 26 14:12:47 2024 00:30:09.230 read: IOPS=215, BW=863KiB/s (884kB/s)(8640KiB/10008msec) 00:30:09.230 slat (usec): min=3, max=555, avg=12.14, stdev=12.75 00:30:09.230 clat (msec): min=17, max=136, avg=74.01, stdev=17.76 00:30:09.230 lat (msec): min=17, max=136, avg=74.02, stdev=17.76 00:30:09.230 clat percentiles (msec): 00:30:09.230 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 62], 00:30:09.230 | 30.00th=[ 66], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:30:09.230 | 70.00th=[ 83], 80.00th=[ 89], 90.00th=[ 97], 95.00th=[ 110], 00:30:09.230 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 134], 99.95th=[ 138], 00:30:09.230 | 99.99th=[ 138] 00:30:09.230 bw ( KiB/s): min= 736, max= 976, per=3.79%, avg=855.21, stdev=84.37, samples=19 00:30:09.231 iops : min= 184, max= 244, avg=213.79, stdev=21.09, samples=19 00:30:09.231 lat (msec) : 20=0.74%, 50=7.73%, 100=84.12%, 250=7.41% 00:30:09.231 cpu : usr=42.96%, sys=1.74%, ctx=1341, majf=0, minf=1636 00:30:09.231 IO depths : 1=2.7%, 2=6.1%, 4=16.2%, 8=64.6%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 complete : 0=0.0%, 4=91.7%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.231 filename1: (groupid=0, jobs=1): err= 0: pid=93239: Fri Apr 26 14:12:47 2024 00:30:09.231 read: IOPS=210, BW=840KiB/s (860kB/s)(8420KiB/10020msec) 00:30:09.231 slat (usec): min=3, max=8030, avg=19.30, stdev=247.08 00:30:09.231 clat (msec): min=31, max=155, avg=76.02, stdev=20.20 00:30:09.231 lat (msec): min=31, max=155, avg=76.04, stdev=20.20 00:30:09.231 clat percentiles (msec): 00:30:09.231 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 61], 00:30:09.231 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:30:09.231 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 100], 95.00th=[ 113], 00:30:09.231 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:30:09.231 | 99.99th=[ 157] 00:30:09.231 bw ( KiB/s): min= 680, max= 1008, per=3.66%, avg=825.32, stdev=88.62, samples=19 00:30:09.231 iops : min= 170, max= 252, avg=206.32, stdev=22.16, samples=19 00:30:09.231 lat (msec) : 50=9.93%, 100=80.48%, 250=9.60% 
00:30:09.231 cpu : usr=33.91%, sys=1.45%, ctx=960, majf=0, minf=1636 00:30:09.231 IO depths : 1=2.1%, 2=4.8%, 4=14.5%, 8=67.3%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 complete : 0=0.0%, 4=90.7%, 8=4.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 issued rwts: total=2105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.231 filename1: (groupid=0, jobs=1): err= 0: pid=93240: Fri Apr 26 14:12:47 2024 00:30:09.231 read: IOPS=216, BW=866KiB/s (887kB/s)(8664KiB/10001msec) 00:30:09.231 slat (usec): min=3, max=4006, avg=12.86, stdev=86.03 00:30:09.231 clat (usec): min=793, max=141712, avg=73775.81, stdev=23427.40 00:30:09.231 lat (usec): min=800, max=141726, avg=73788.67, stdev=23426.74 00:30:09.231 clat percentiles (usec): 00:30:09.231 | 1.00th=[ 1500], 5.00th=[ 39060], 10.00th=[ 46400], 20.00th=[ 61604], 00:30:09.231 | 30.00th=[ 66323], 40.00th=[ 68682], 50.00th=[ 70779], 60.00th=[ 78119], 00:30:09.231 | 70.00th=[ 83362], 80.00th=[ 90702], 90.00th=[104334], 95.00th=[110625], 00:30:09.231 | 99.00th=[129500], 99.50th=[139461], 99.90th=[141558], 99.95th=[141558], 00:30:09.231 | 99.99th=[141558] 00:30:09.231 bw ( KiB/s): min= 640, max= 1072, per=3.71%, avg=837.89, stdev=104.30, samples=19 00:30:09.231 iops : min= 160, max= 268, avg=209.47, stdev=26.08, samples=19 00:30:09.231 lat (usec) : 1000=0.23% 00:30:09.231 lat (msec) : 2=1.48%, 4=1.25%, 20=0.23%, 50=9.37%, 100=74.88% 00:30:09.231 lat (msec) : 250=12.56% 00:30:09.231 cpu : usr=43.51%, sys=1.88%, ctx=1363, majf=0, minf=1634 00:30:09.231 IO depths : 1=3.2%, 2=7.0%, 4=17.8%, 8=62.2%, 16=9.8%, 32=0.0%, >=64=0.0% 00:30:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 complete : 0=0.0%, 4=91.9%, 8=2.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.231 filename2: (groupid=0, jobs=1): err= 0: pid=93241: Fri Apr 26 14:12:47 2024 00:30:09.231 read: IOPS=281, BW=1127KiB/s (1154kB/s)(11.0MiB/10034msec) 00:30:09.231 slat (usec): min=3, max=4030, avg=13.26, stdev=94.76 00:30:09.231 clat (usec): min=1628, max=130885, avg=56629.87, stdev=20774.17 00:30:09.231 lat (usec): min=1636, max=130892, avg=56643.13, stdev=20775.89 00:30:09.231 clat percentiles (msec): 00:30:09.231 | 1.00th=[ 3], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 44], 00:30:09.231 | 30.00th=[ 47], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 61], 00:30:09.231 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 83], 95.00th=[ 96], 00:30:09.231 | 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 131], 99.95th=[ 131], 00:30:09.231 | 99.99th=[ 131] 00:30:09.231 bw ( KiB/s): min= 856, max= 2176, per=4.99%, avg=1128.00, stdev=286.09, samples=20 00:30:09.231 iops : min= 214, max= 544, avg=282.00, stdev=71.52, samples=20 00:30:09.231 lat (msec) : 2=0.57%, 4=1.70%, 10=2.26%, 50=36.22%, 100=55.50% 00:30:09.231 lat (msec) : 250=3.75% 00:30:09.231 cpu : usr=41.83%, sys=1.57%, ctx=1240, majf=0, minf=1637 00:30:09.231 IO depths : 1=0.7%, 2=1.8%, 4=8.7%, 8=76.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 issued rwts: total=2827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.231 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:30:09.231 filename2: (groupid=0, jobs=1): err= 0: pid=93242: Fri Apr 26 14:12:47 2024 00:30:09.231 read: IOPS=229, BW=916KiB/s (938kB/s)(9216KiB/10061msec) 00:30:09.231 slat (usec): min=3, max=8032, avg=18.11, stdev=236.18 00:30:09.231 clat (msec): min=27, max=137, avg=69.68, stdev=20.00 00:30:09.231 lat (msec): min=27, max=137, avg=69.69, stdev=20.01 00:30:09.231 clat percentiles (msec): 00:30:09.231 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 46], 20.00th=[ 51], 00:30:09.231 | 30.00th=[ 59], 40.00th=[ 62], 50.00th=[ 69], 60.00th=[ 72], 00:30:09.231 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:30:09.231 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 130], 99.95th=[ 130], 00:30:09.231 | 99.99th=[ 138] 00:30:09.231 bw ( KiB/s): min= 640, max= 1040, per=4.05%, avg=914.75, stdev=115.63, samples=20 00:30:09.231 iops : min= 160, max= 260, avg=228.65, stdev=28.87, samples=20 00:30:09.231 lat (msec) : 50=19.97%, 100=72.14%, 250=7.90% 00:30:09.231 cpu : usr=31.73%, sys=1.41%, ctx=915, majf=0, minf=1637 00:30:09.231 IO depths : 1=1.3%, 2=3.0%, 4=11.6%, 8=72.3%, 16=11.8%, 32=0.0%, >=64=0.0% 00:30:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 complete : 0=0.0%, 4=90.2%, 8=4.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 issued rwts: total=2304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.231 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.231 filename2: (groupid=0, jobs=1): err= 0: pid=93243: Fri Apr 26 14:12:47 2024 00:30:09.231 read: IOPS=211, BW=844KiB/s (865kB/s)(8448KiB/10004msec) 00:30:09.231 slat (usec): min=3, max=8033, avg=17.07, stdev=195.18 00:30:09.231 clat (msec): min=4, max=153, avg=75.66, stdev=20.97 00:30:09.231 lat (msec): min=4, max=153, avg=75.68, stdev=20.96 00:30:09.231 clat percentiles (msec): 00:30:09.231 | 1.00th=[ 15], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 62], 00:30:09.231 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 77], 00:30:09.231 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 106], 95.00th=[ 117], 00:30:09.231 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:30:09.231 | 99.99th=[ 155] 00:30:09.231 bw ( KiB/s): min= 640, max= 1024, per=3.63%, avg=818.05, stdev=86.85, samples=19 00:30:09.231 iops : min= 160, max= 256, avg=204.47, stdev=21.74, samples=19 00:30:09.231 lat (msec) : 10=0.76%, 20=0.47%, 50=5.82%, 100=80.87%, 250=12.07% 00:30:09.231 cpu : usr=33.37%, sys=1.38%, ctx=1164, majf=0, minf=1636 00:30:09.231 IO depths : 1=3.1%, 2=7.1%, 4=17.9%, 8=62.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:30:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.231 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.232 filename2: (groupid=0, jobs=1): err= 0: pid=93244: Fri Apr 26 14:12:47 2024 00:30:09.232 read: IOPS=214, BW=859KiB/s (880kB/s)(8604KiB/10015msec) 00:30:09.232 slat (usec): min=3, max=8034, avg=22.68, stdev=299.46 00:30:09.232 clat (msec): min=17, max=153, avg=74.29, stdev=17.84 00:30:09.232 lat (msec): min=17, max=153, avg=74.31, stdev=17.85 00:30:09.232 clat percentiles (msec): 00:30:09.232 | 1.00th=[ 25], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 61], 00:30:09.232 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:30:09.232 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 96], 95.00th=[ 107], 00:30:09.232 | 99.00th=[ 121], 99.50th=[ 123], 
99.90th=[ 155], 99.95th=[ 155], 00:30:09.232 | 99.99th=[ 155] 00:30:09.232 bw ( KiB/s): min= 728, max= 944, per=3.74%, avg=844.68, stdev=68.16, samples=19 00:30:09.232 iops : min= 182, max= 236, avg=211.16, stdev=17.04, samples=19 00:30:09.232 lat (msec) : 20=0.74%, 50=6.00%, 100=85.22%, 250=8.04% 00:30:09.232 cpu : usr=31.98%, sys=1.29%, ctx=886, majf=0, minf=1636 00:30:09.232 IO depths : 1=2.6%, 2=5.9%, 4=16.0%, 8=65.0%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:09.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 issued rwts: total=2151,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.232 filename2: (groupid=0, jobs=1): err= 0: pid=93245: Fri Apr 26 14:12:47 2024 00:30:09.232 read: IOPS=238, BW=955KiB/s (978kB/s)(9592KiB/10044msec) 00:30:09.232 slat (usec): min=5, max=8020, avg=14.50, stdev=163.64 00:30:09.232 clat (msec): min=22, max=137, avg=66.84, stdev=19.96 00:30:09.232 lat (msec): min=22, max=137, avg=66.86, stdev=19.96 00:30:09.232 clat percentiles (msec): 00:30:09.232 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 49], 00:30:09.232 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 65], 60.00th=[ 70], 00:30:09.232 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 105], 00:30:09.232 | 99.00th=[ 118], 99.50th=[ 131], 99.90th=[ 138], 99.95th=[ 138], 00:30:09.232 | 99.99th=[ 138] 00:30:09.232 bw ( KiB/s): min= 736, max= 1376, per=4.22%, avg=953.15, stdev=171.61, samples=20 00:30:09.232 iops : min= 184, max= 344, avg=238.10, stdev=42.79, samples=20 00:30:09.232 lat (msec) : 50=22.85%, 100=70.73%, 250=6.42% 00:30:09.232 cpu : usr=44.17%, sys=1.45%, ctx=1584, majf=0, minf=1635 00:30:09.232 IO depths : 1=0.3%, 2=0.6%, 4=5.1%, 8=79.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:30:09.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 complete : 0=0.0%, 4=89.4%, 8=7.6%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 issued rwts: total=2398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.232 filename2: (groupid=0, jobs=1): err= 0: pid=93246: Fri Apr 26 14:12:47 2024 00:30:09.232 read: IOPS=240, BW=960KiB/s (984kB/s)(9644KiB/10041msec) 00:30:09.232 slat (usec): min=4, max=8046, avg=29.19, stdev=365.31 00:30:09.232 clat (msec): min=30, max=147, avg=66.39, stdev=18.98 00:30:09.232 lat (msec): min=30, max=147, avg=66.42, stdev=18.99 00:30:09.232 clat percentiles (msec): 00:30:09.232 | 1.00th=[ 34], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:30:09.232 | 30.00th=[ 54], 40.00th=[ 60], 50.00th=[ 66], 60.00th=[ 71], 00:30:09.232 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 91], 95.00th=[ 97], 00:30:09.232 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 148], 99.95th=[ 148], 00:30:09.232 | 99.99th=[ 148] 00:30:09.232 bw ( KiB/s): min= 728, max= 1288, per=4.24%, avg=957.70, stdev=133.29, samples=20 00:30:09.232 iops : min= 182, max= 322, avg=239.40, stdev=33.32, samples=20 00:30:09.232 lat (msec) : 50=25.47%, 100=70.22%, 250=4.31% 00:30:09.232 cpu : usr=33.36%, sys=1.37%, ctx=1100, majf=0, minf=1636 00:30:09.232 IO depths : 1=0.9%, 2=1.8%, 4=8.9%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:30:09.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 complete : 0=0.0%, 4=89.6%, 8=5.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 issued rwts: total=2411,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:30:09.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.232 filename2: (groupid=0, jobs=1): err= 0: pid=93247: Fri Apr 26 14:12:47 2024 00:30:09.232 read: IOPS=224, BW=897KiB/s (918kB/s)(8976KiB/10009msec) 00:30:09.232 slat (usec): min=3, max=4043, avg=16.20, stdev=146.82 00:30:09.232 clat (msec): min=35, max=154, avg=71.24, stdev=18.66 00:30:09.232 lat (msec): min=35, max=154, avg=71.26, stdev=18.66 00:30:09.232 clat percentiles (msec): 00:30:09.232 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 57], 00:30:09.232 | 30.00th=[ 63], 40.00th=[ 66], 50.00th=[ 70], 60.00th=[ 72], 00:30:09.232 | 70.00th=[ 79], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 102], 00:30:09.232 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:30:09.232 | 99.99th=[ 155] 00:30:09.232 bw ( KiB/s): min= 640, max= 1128, per=3.94%, avg=890.95, stdev=113.65, samples=19 00:30:09.232 iops : min= 160, max= 282, avg=222.74, stdev=28.41, samples=19 00:30:09.232 lat (msec) : 50=13.19%, 100=81.55%, 250=5.26% 00:30:09.232 cpu : usr=38.71%, sys=1.60%, ctx=1209, majf=0, minf=1634 00:30:09.232 IO depths : 1=2.6%, 2=5.8%, 4=15.2%, 8=66.1%, 16=10.3%, 32=0.0%, >=64=0.0% 00:30:09.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.232 filename2: (groupid=0, jobs=1): err= 0: pid=93248: Fri Apr 26 14:12:47 2024 00:30:09.232 read: IOPS=224, BW=897KiB/s (919kB/s)(8984KiB/10011msec) 00:30:09.232 slat (usec): min=3, max=11029, avg=21.87, stdev=282.08 00:30:09.232 clat (msec): min=34, max=145, avg=71.18, stdev=17.96 00:30:09.232 lat (msec): min=34, max=145, avg=71.20, stdev=17.97 00:30:09.232 clat percentiles (msec): 00:30:09.232 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 60], 00:30:09.232 | 30.00th=[ 64], 40.00th=[ 67], 50.00th=[ 69], 60.00th=[ 72], 00:30:09.232 | 70.00th=[ 79], 80.00th=[ 84], 90.00th=[ 95], 95.00th=[ 102], 00:30:09.232 | 99.00th=[ 130], 99.50th=[ 140], 99.90th=[ 146], 99.95th=[ 146], 00:30:09.232 | 99.99th=[ 146] 00:30:09.232 bw ( KiB/s): min= 720, max= 1208, per=3.95%, avg=892.00, stdev=126.40, samples=20 00:30:09.232 iops : min= 180, max= 302, avg=223.00, stdev=31.60, samples=20 00:30:09.232 lat (msec) : 50=12.82%, 100=80.23%, 250=6.95% 00:30:09.232 cpu : usr=42.68%, sys=1.58%, ctx=1353, majf=0, minf=1637 00:30:09.232 IO depths : 1=2.9%, 2=6.4%, 4=17.2%, 8=63.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:30:09.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:09.232 issued rwts: total=2246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:09.232 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:09.232 00:30:09.232 Run status group 0 (all jobs): 00:30:09.232 READ: bw=22.0MiB/s (23.1MB/s), 840KiB/s-1172KiB/s (860kB/s-1200kB/s), io=222MiB (232MB), run=10001-10061msec 00:30:09.491 ----------------------------------------------------- 00:30:09.491 Suppressions used: 00:30:09.491 count bytes template 00:30:09.491 45 402 /usr/src/fio/parse.c 00:30:09.491 1 8 libtcmalloc_minimal.so 00:30:09.491 1 904 libcrypto.so 00:30:09.491 ----------------------------------------------------- 00:30:09.491 00:30:09.491 14:12:49 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:09.491 14:12:49 -- 
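In the per-job lines above, iops is simply bandwidth divided by the 4k block size, and per= is the job's share of the group bandwidth from the READ summary (22.0MiB/s, i.e. about 22528KiB/s). Taking pid 93225 as a worked example, the figures line up with what fio reports (4.64% vs. the printed 4.63% is rounding):

    # pid 93225: avg bw 1045.60 KiB/s at bs=4k, group bw 22.0 MiB/s
    awk 'BEGIN { printf "iops=%.1f  per=%.2f%%\n", 1045.60/4, 100*1045.60/(22.0*1024) }'
    # -> iops=261.4  per=4.64%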
target/dif.sh@43 -- # local sub 00:30:09.491 14:12:49 -- target/dif.sh@45 -- # for sub in "$@" 00:30:09.491 14:12:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:09.491 14:12:49 -- target/dif.sh@36 -- # local sub_id=0 00:30:09.491 14:12:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:09.491 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.491 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.491 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.491 14:12:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:09.491 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.491 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@45 -- # for sub in "$@" 00:30:09.750 14:12:49 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:09.750 14:12:49 -- target/dif.sh@36 -- # local sub_id=1 00:30:09.750 14:12:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@45 -- # for sub in "$@" 00:30:09.750 14:12:49 -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:09.750 14:12:49 -- target/dif.sh@36 -- # local sub_id=2 00:30:09.750 14:12:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@115 -- # NULL_DIF=1 00:30:09.750 14:12:49 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:09.750 14:12:49 -- target/dif.sh@115 -- # numjobs=2 00:30:09.750 14:12:49 -- target/dif.sh@115 -- # iodepth=8 00:30:09.750 14:12:49 -- target/dif.sh@115 -- # runtime=5 00:30:09.750 14:12:49 -- target/dif.sh@115 -- # files=1 00:30:09.750 14:12:49 -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:09.750 14:12:49 -- target/dif.sh@28 -- # local sub 00:30:09.750 14:12:49 -- target/dif.sh@30 -- # for sub in "$@" 00:30:09.750 14:12:49 -- target/dif.sh@31 -- # create_subsystem 0 00:30:09.750 14:12:49 -- target/dif.sh@18 -- # local sub_id=0 00:30:09.750 14:12:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 bdev_null0 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:09.750 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.750 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.750 [2024-04-26 14:12:49.245430] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.750 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.750 14:12:49 -- target/dif.sh@30 -- # for sub in "$@" 00:30:09.750 14:12:49 -- target/dif.sh@31 -- # create_subsystem 1 00:30:09.750 14:12:49 -- target/dif.sh@18 -- # local sub_id=1 00:30:09.751 14:12:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:09.751 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.751 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.751 bdev_null1 00:30:09.751 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.751 14:12:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:09.751 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.751 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.751 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.751 14:12:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:09.751 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.751 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.751 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.751 14:12:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.751 14:12:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:09.751 14:12:49 -- common/autotest_common.sh@10 -- # set +x 00:30:09.751 14:12:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:09.751 14:12:49 -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:09.751 14:12:49 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:09.751 14:12:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:09.751 14:12:49 -- nvmf/common.sh@521 -- # config=() 00:30:09.751 14:12:49 -- target/dif.sh@82 -- # gen_fio_conf 00:30:09.751 14:12:49 -- nvmf/common.sh@521 -- # local subsystem config 00:30:09.751 14:12:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:09.751 14:12:49 -- target/dif.sh@54 -- # local file 00:30:09.751 14:12:49 -- target/dif.sh@56 -- # cat 00:30:09.751 14:12:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:09.751 14:12:49 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:30:09.751 14:12:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:09.751 { 00:30:09.751 "params": { 00:30:09.751 "name": "Nvme$subsystem", 00:30:09.751 "trtype": "$TEST_TRANSPORT", 00:30:09.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.751 "adrfam": "ipv4", 00:30:09.751 "trsvcid": "$NVMF_PORT", 00:30:09.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.751 "hdgst": ${hdgst:-false}, 00:30:09.751 "ddgst": ${ddgst:-false} 00:30:09.751 }, 00:30:09.751 "method": "bdev_nvme_attach_controller" 00:30:09.751 } 00:30:09.751 EOF 00:30:09.751 )") 00:30:09.751 14:12:49 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:09.751 14:12:49 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:09.751 14:12:49 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:09.751 14:12:49 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:09.751 14:12:49 -- common/autotest_common.sh@1327 -- # shift 00:30:09.751 14:12:49 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:09.751 14:12:49 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:09.751 14:12:49 -- nvmf/common.sh@543 -- # cat 00:30:09.751 14:12:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:09.751 14:12:49 -- target/dif.sh@72 -- # (( file <= files )) 00:30:09.751 14:12:49 -- target/dif.sh@73 -- # cat 00:30:09.751 14:12:49 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:09.751 14:12:49 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:09.751 14:12:49 -- target/dif.sh@72 -- # (( file++ )) 00:30:09.751 14:12:49 -- target/dif.sh@72 -- # (( file <= files )) 00:30:09.751 14:12:49 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:09.751 14:12:49 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:09.751 14:12:49 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:09.751 { 00:30:09.751 "params": { 00:30:09.751 "name": "Nvme$subsystem", 00:30:09.751 "trtype": "$TEST_TRANSPORT", 00:30:09.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:09.751 "adrfam": "ipv4", 00:30:09.751 "trsvcid": "$NVMF_PORT", 00:30:09.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:09.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:09.751 "hdgst": ${hdgst:-false}, 00:30:09.751 "ddgst": ${ddgst:-false} 00:30:09.751 }, 00:30:09.751 "method": "bdev_nvme_attach_controller" 00:30:09.751 } 00:30:09.751 EOF 00:30:09.751 )") 00:30:09.751 14:12:49 -- nvmf/common.sh@543 -- # cat 00:30:09.751 14:12:49 -- nvmf/common.sh@545 -- # jq . 
00:30:09.751 14:12:49 -- nvmf/common.sh@546 -- # IFS=, 00:30:09.751 14:12:49 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:09.751 "params": { 00:30:09.751 "name": "Nvme0", 00:30:09.751 "trtype": "tcp", 00:30:09.751 "traddr": "10.0.0.2", 00:30:09.751 "adrfam": "ipv4", 00:30:09.751 "trsvcid": "4420", 00:30:09.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:09.751 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:09.751 "hdgst": false, 00:30:09.751 "ddgst": false 00:30:09.751 }, 00:30:09.751 "method": "bdev_nvme_attach_controller" 00:30:09.751 },{ 00:30:09.751 "params": { 00:30:09.751 "name": "Nvme1", 00:30:09.751 "trtype": "tcp", 00:30:09.751 "traddr": "10.0.0.2", 00:30:09.751 "adrfam": "ipv4", 00:30:09.751 "trsvcid": "4420", 00:30:09.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:09.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:09.751 "hdgst": false, 00:30:09.751 "ddgst": false 00:30:09.751 }, 00:30:09.751 "method": "bdev_nvme_attach_controller" 00:30:09.751 }' 00:30:09.751 14:12:49 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:09.751 14:12:49 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:09.751 14:12:49 -- common/autotest_common.sh@1333 -- # break 00:30:09.751 14:12:49 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:09.751 14:12:49 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:10.010 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:10.010 ... 00:30:10.010 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:10.010 ... 
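For this pass the parameters set at target/dif.sh@115 (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1) produce the two per-filename job banners echoed just above: 8k reads, 16k writes, 128k trims, 2 jobs per filename, hence the 4 threads started below. A hypothetical job file of the same shape is sketched here; the real one is generated by gen_fio_conf and passed over /dev/fd/61, and the bdev names are illustrative (whatever bdev_nvme_attach_controller exposes for Nvme0/Nvme1):

    # hypothetical stand-in for the job file gen_fio_conf writes to /dev/fd/61
    cat > dif.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k   ; read,write,trim sizes, matching the (R)/(W)/(T) banner above
    iodepth=8
    numjobs=2
    runtime=5
    [filename0]
    filename=Nvme0n1 ; illustrative bdev name
    [filename1]
    filename=Nvme1n1 ; illustrative bdev name
    EOF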
00:30:10.010 fio-3.35 00:30:10.010 Starting 4 threads 00:30:16.571 00:30:16.571 filename0: (groupid=0, jobs=1): err= 0: pid=93395: Fri Apr 26 14:12:55 2024 00:30:16.571 read: IOPS=2160, BW=16.9MiB/s (17.7MB/s)(84.4MiB/5002msec) 00:30:16.571 slat (nsec): min=6654, max=77826, avg=15654.91, stdev=3958.10 00:30:16.571 clat (usec): min=2675, max=10565, avg=3624.55, stdev=225.59 00:30:16.571 lat (usec): min=2703, max=10593, avg=3640.21, stdev=225.57 00:30:16.571 clat percentiles (usec): 00:30:16.571 | 1.00th=[ 3490], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3556], 00:30:16.571 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:30:16.571 | 70.00th=[ 3621], 80.00th=[ 3654], 90.00th=[ 3687], 95.00th=[ 3785], 00:30:16.571 | 99.00th=[ 4047], 99.50th=[ 4359], 99.90th=[ 5342], 99.95th=[10552], 00:30:16.571 | 99.99th=[10552] 00:30:16.571 bw ( KiB/s): min=16862, max=17408, per=24.96%, avg=17290.44, stdev=172.94, samples=9 00:30:16.571 iops : min= 2107, max= 2176, avg=2161.22, stdev=21.85, samples=9 00:30:16.571 lat (msec) : 4=98.85%, 10=1.07%, 20=0.07% 00:30:16.571 cpu : usr=92.76%, sys=6.14%, ctx=10, majf=0, minf=1635 00:30:16.571 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.571 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.571 issued rwts: total=10808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.571 filename0: (groupid=0, jobs=1): err= 0: pid=93396: Fri Apr 26 14:12:55 2024 00:30:16.571 read: IOPS=2163, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5002msec) 00:30:16.571 slat (usec): min=5, max=105, avg=15.13, stdev= 3.87 00:30:16.571 clat (usec): min=1777, max=7362, avg=3622.19, stdev=201.28 00:30:16.571 lat (usec): min=1784, max=7385, avg=3637.33, stdev=201.52 00:30:16.571 clat percentiles (usec): 00:30:16.571 | 1.00th=[ 2933], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3556], 00:30:16.571 | 30.00th=[ 3589], 40.00th=[ 3589], 50.00th=[ 3621], 60.00th=[ 3621], 00:30:16.571 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3687], 95.00th=[ 3785], 00:30:16.571 | 99.00th=[ 4080], 99.50th=[ 4621], 99.90th=[ 5735], 99.95th=[ 7308], 00:30:16.571 | 99.99th=[ 7373] 00:30:16.571 bw ( KiB/s): min=17024, max=17408, per=24.98%, avg=17304.56, stdev=121.40, samples=9 00:30:16.571 iops : min= 2128, max= 2176, avg=2163.00, stdev=15.13, samples=9 00:30:16.571 lat (msec) : 2=0.12%, 4=98.70%, 10=1.18% 00:30:16.571 cpu : usr=92.26%, sys=6.66%, ctx=29, majf=0, minf=1637 00:30:16.571 IO depths : 1=11.8%, 2=25.0%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.571 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.571 issued rwts: total=10824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.571 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.572 filename1: (groupid=0, jobs=1): err= 0: pid=93397: Fri Apr 26 14:12:55 2024 00:30:16.572 read: IOPS=2169, BW=16.9MiB/s (17.8MB/s)(84.8MiB/5003msec) 00:30:16.572 slat (nsec): min=4025, max=64209, avg=8772.40, stdev=4126.70 00:30:16.572 clat (usec): min=1124, max=7146, avg=3640.42, stdev=216.09 00:30:16.572 lat (usec): min=1135, max=7168, avg=3649.20, stdev=216.24 00:30:16.572 clat percentiles (usec): 00:30:16.572 | 1.00th=[ 3523], 5.00th=[ 3589], 10.00th=[ 3589], 20.00th=[ 3621], 00:30:16.572 | 30.00th=[ 3621], 
40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3654], 00:30:16.572 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3785], 00:30:16.572 | 99.00th=[ 3982], 99.50th=[ 4178], 99.90th=[ 4883], 99.95th=[ 5014], 00:30:16.572 | 99.99th=[ 7111] 00:30:16.572 bw ( KiB/s): min=17280, max=17584, per=25.08%, avg=17372.11, stdev=119.65, samples=9 00:30:16.572 iops : min= 2160, max= 2198, avg=2171.44, stdev=15.01, samples=9 00:30:16.572 lat (msec) : 2=0.54%, 4=98.50%, 10=0.96% 00:30:16.572 cpu : usr=91.84%, sys=6.96%, ctx=11, majf=0, minf=1637 00:30:16.572 IO depths : 1=11.2%, 2=24.3%, 4=50.6%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.572 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.572 issued rwts: total=10854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.572 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.572 filename1: (groupid=0, jobs=1): err= 0: pid=93398: Fri Apr 26 14:12:55 2024 00:30:16.572 read: IOPS=2165, BW=16.9MiB/s (17.7MB/s)(84.6MiB/5002msec) 00:30:16.572 slat (usec): min=6, max=101, avg=14.30, stdev= 4.96 00:30:16.572 clat (usec): min=1853, max=5731, avg=3627.56, stdev=151.08 00:30:16.572 lat (usec): min=1860, max=5754, avg=3641.86, stdev=150.95 00:30:16.572 clat percentiles (usec): 00:30:16.572 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3589], 00:30:16.572 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3621], 00:30:16.572 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3720], 95.00th=[ 3785], 00:30:16.572 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 4948], 99.95th=[ 5014], 00:30:16.572 | 99.99th=[ 5342] 00:30:16.572 bw ( KiB/s): min=17024, max=17536, per=25.01%, avg=17322.67, stdev=143.11, samples=9 00:30:16.572 iops : min= 2128, max= 2192, avg=2165.33, stdev=17.89, samples=9 00:30:16.572 lat (msec) : 2=0.18%, 4=98.74%, 10=1.08% 00:30:16.572 cpu : usr=92.36%, sys=6.44%, ctx=31, majf=0, minf=1637 00:30:16.572 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:16.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.572 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:16.572 issued rwts: total=10832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:16.572 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:16.572 00:30:16.572 Run status group 0 (all jobs): 00:30:16.572 READ: bw=67.6MiB/s (70.9MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.8MB/s), io=338MiB (355MB), run=5002-5003msec 00:30:17.508 ----------------------------------------------------- 00:30:17.508 Suppressions used: 00:30:17.508 count bytes template 00:30:17.508 6 52 /usr/src/fio/parse.c 00:30:17.508 1 8 libtcmalloc_minimal.so 00:30:17.508 1 904 libcrypto.so 00:30:17.508 ----------------------------------------------------- 00:30:17.508 00:30:17.508 14:12:56 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:17.508 14:12:56 -- target/dif.sh@43 -- # local sub 00:30:17.508 14:12:56 -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.508 14:12:56 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:17.508 14:12:56 -- target/dif.sh@36 -- # local sub_id=0 00:30:17.508 14:12:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.508 14:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 14:12:56 -- common/autotest_common.sh@577 -- # [[ 
0 == 0 ]] 00:30:17.508 14:12:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:17.508 14:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 14:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 14:12:56 -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.508 14:12:56 -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:17.508 14:12:56 -- target/dif.sh@36 -- # local sub_id=1 00:30:17.508 14:12:56 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.508 14:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 14:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 14:12:56 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:17.508 14:12:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 14:12:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 00:30:17.508 real 0m28.353s 00:30:17.508 user 2m9.606s 00:30:17.508 sys 0m7.407s 00:30:17.508 14:12:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:17.508 ************************************ 00:30:17.508 14:12:56 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 END TEST fio_dif_rand_params 00:30:17.508 ************************************ 00:30:17.508 14:12:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:17.508 14:12:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:17.508 14:12:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:17.508 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 ************************************ 00:30:17.508 START TEST fio_dif_digest 00:30:17.508 ************************************ 00:30:17.508 14:12:57 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:30:17.508 14:12:57 -- target/dif.sh@123 -- # local NULL_DIF 00:30:17.508 14:12:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:17.508 14:12:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:30:17.508 14:12:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:30:17.508 14:12:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:17.508 14:12:57 -- target/dif.sh@127 -- # numjobs=3 00:30:17.508 14:12:57 -- target/dif.sh@127 -- # iodepth=3 00:30:17.508 14:12:57 -- target/dif.sh@127 -- # runtime=10 00:30:17.508 14:12:57 -- target/dif.sh@128 -- # hdgst=true 00:30:17.508 14:12:57 -- target/dif.sh@128 -- # ddgst=true 00:30:17.508 14:12:57 -- target/dif.sh@130 -- # create_subsystems 0 00:30:17.508 14:12:57 -- target/dif.sh@28 -- # local sub 00:30:17.508 14:12:57 -- target/dif.sh@30 -- # for sub in "$@" 00:30:17.508 14:12:57 -- target/dif.sh@31 -- # create_subsystem 0 00:30:17.508 14:12:57 -- target/dif.sh@18 -- # local sub_id=0 00:30:17.508 14:12:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:17.508 14:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 bdev_null0 00:30:17.508 14:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 14:12:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:17.508 14:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 
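The create_subsystem helper being traced here amounts to four RPCs against the running nvmf_tgt (the namespace and listener calls appear in the trace just below). A minimal sketch of the same setup issued by hand from the SPDK repo root, assuming the default RPC socket, would be:
  # Null bdev: 64 MiB, 512-byte blocks, 16 bytes of metadata per block, DIF type 3 (as in target/dif.sh)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  # NVMe/TCP subsystem on 10.0.0.2:4420 backed by that bdev
  # (the tcp transport itself was created once, earlier in the test, when the target started)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420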
14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 14:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 14:12:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:17.508 14:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 14:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 14:12:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.508 14:12:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:17.508 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:30:17.508 [2024-04-26 14:12:57.161991] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.508 14:12:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:17.508 14:12:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:17.509 14:12:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:17.509 14:12:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:17.509 14:12:57 -- nvmf/common.sh@521 -- # config=() 00:30:17.509 14:12:57 -- nvmf/common.sh@521 -- # local subsystem config 00:30:17.509 14:12:57 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:30:17.509 14:12:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.509 14:12:57 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:30:17.509 { 00:30:17.509 "params": { 00:30:17.509 "name": "Nvme$subsystem", 00:30:17.509 "trtype": "$TEST_TRANSPORT", 00:30:17.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.509 "adrfam": "ipv4", 00:30:17.509 "trsvcid": "$NVMF_PORT", 00:30:17.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.509 "hdgst": ${hdgst:-false}, 00:30:17.509 "ddgst": ${ddgst:-false} 00:30:17.509 }, 00:30:17.509 "method": "bdev_nvme_attach_controller" 00:30:17.509 } 00:30:17.509 EOF 00:30:17.509 )") 00:30:17.509 14:12:57 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.509 14:12:57 -- target/dif.sh@82 -- # gen_fio_conf 00:30:17.509 14:12:57 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:30:17.509 14:12:57 -- target/dif.sh@54 -- # local file 00:30:17.509 14:12:57 -- target/dif.sh@56 -- # cat 00:30:17.509 14:12:57 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.509 14:12:57 -- common/autotest_common.sh@1325 -- # local sanitizers 00:30:17.509 14:12:57 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.509 14:12:57 -- common/autotest_common.sh@1327 -- # shift 00:30:17.509 14:12:57 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:30:17.509 14:12:57 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.509 14:12:57 -- nvmf/common.sh@543 -- # cat 00:30:17.509 14:12:57 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:17.509 14:12:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:17.509 14:12:57 -- common/autotest_common.sh@1331 -- # grep libasan 00:30:17.509 14:12:57 -- target/dif.sh@72 -- # (( file <= files )) 00:30:17.509 14:12:57 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:30:17.509 14:12:57 -- 
nvmf/common.sh@545 -- # jq . 00:30:17.767 14:12:57 -- nvmf/common.sh@546 -- # IFS=, 00:30:17.767 14:12:57 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:30:17.767 "params": { 00:30:17.767 "name": "Nvme0", 00:30:17.767 "trtype": "tcp", 00:30:17.767 "traddr": "10.0.0.2", 00:30:17.767 "adrfam": "ipv4", 00:30:17.767 "trsvcid": "4420", 00:30:17.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.767 "hdgst": true, 00:30:17.767 "ddgst": true 00:30:17.767 }, 00:30:17.767 "method": "bdev_nvme_attach_controller" 00:30:17.767 }' 00:30:17.767 14:12:57 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:17.767 14:12:57 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:17.767 14:12:57 -- common/autotest_common.sh@1333 -- # break 00:30:17.767 14:12:57 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:17.767 14:12:57 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.767 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:17.767 ... 00:30:17.767 fio-3.35 00:30:17.767 Starting 3 threads 00:30:29.964 00:30:29.964 filename0: (groupid=0, jobs=1): err= 0: pid=93513: Fri Apr 26 14:13:08 2024 00:30:29.964 read: IOPS=264, BW=33.1MiB/s (34.7MB/s)(331MiB/10004msec) 00:30:29.964 slat (nsec): min=7206, max=66626, avg=16493.02, stdev=5324.13 00:30:29.964 clat (usec): min=8515, max=53540, avg=11300.11, stdev=3470.24 00:30:29.964 lat (usec): min=8534, max=53557, avg=11316.61, stdev=3470.44 00:30:29.964 clat percentiles (usec): 00:30:29.964 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:30:29.964 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:30:29.964 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:30:29.964 | 99.00th=[13960], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:30:29.964 | 99.99th=[53740] 00:30:29.964 bw ( KiB/s): min=31744, max=35840, per=38.50%, avg=33869.21, stdev=1397.91, samples=19 00:30:29.964 iops : min= 248, max= 280, avg=264.58, stdev=10.91, samples=19 00:30:29.964 lat (msec) : 10=11.17%, 20=88.16%, 100=0.68% 00:30:29.964 cpu : usr=91.56%, sys=7.08%, ctx=123, majf=0, minf=1637 00:30:29.964 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.964 issued rwts: total=2651,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.964 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:29.964 filename0: (groupid=0, jobs=1): err= 0: pid=93514: Fri Apr 26 14:13:08 2024 00:30:29.964 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(245MiB/10004msec) 00:30:29.964 slat (nsec): min=7034, max=91508, avg=16258.45, stdev=6710.81 00:30:29.964 clat (usec): min=5115, max=19927, avg=15294.69, stdev=1383.23 00:30:29.964 lat (usec): min=5135, max=19940, avg=15310.94, stdev=1384.67 00:30:29.964 clat percentiles (usec): 00:30:29.964 | 1.00th=[ 9241], 5.00th=[13960], 10.00th=[14484], 20.00th=[14877], 00:30:29.964 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:30:29.964 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:30:29.964 | 99.00th=[17957], 99.50th=[18220], 
99.90th=[19268], 99.95th=[20055], 00:30:29.964 | 99.99th=[20055] 00:30:29.964 bw ( KiB/s): min=23040, max=26368, per=28.52%, avg=25088.00, stdev=822.92, samples=19 00:30:29.964 iops : min= 180, max= 206, avg=196.00, stdev= 6.43, samples=19 00:30:29.964 lat (msec) : 10=2.91%, 20=97.09% 00:30:29.964 cpu : usr=91.67%, sys=7.14%, ctx=16, majf=0, minf=1635 00:30:29.964 IO depths : 1=9.9%, 2=90.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.964 issued rwts: total=1959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.964 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:29.964 filename0: (groupid=0, jobs=1): err= 0: pid=93515: Fri Apr 26 14:13:08 2024 00:30:29.964 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(283MiB/10003msec) 00:30:29.964 slat (nsec): min=7186, max=94260, avg=16642.85, stdev=5939.46 00:30:29.964 clat (usec): min=5075, max=19261, avg=13226.98, stdev=1550.37 00:30:29.964 lat (usec): min=5092, max=19282, avg=13243.62, stdev=1551.33 00:30:29.964 clat percentiles (usec): 00:30:29.964 | 1.00th=[ 7570], 5.00th=[11076], 10.00th=[11863], 20.00th=[12387], 00:30:29.964 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13698], 00:30:29.964 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14746], 95.00th=[15270], 00:30:29.964 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17957], 99.95th=[18220], 00:30:29.964 | 99.99th=[19268] 00:30:29.964 bw ( KiB/s): min=26112, max=30976, per=32.98%, avg=29011.79, stdev=1162.44, samples=19 00:30:29.964 iops : min= 204, max= 242, avg=226.63, stdev= 9.09, samples=19 00:30:29.964 lat (msec) : 10=4.02%, 20=95.98% 00:30:29.964 cpu : usr=91.18%, sys=7.22%, ctx=88, majf=0, minf=1637 00:30:29.964 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:29.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.964 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:29.964 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:29.964 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:29.964 00:30:29.964 Run status group 0 (all jobs): 00:30:29.964 READ: bw=85.9MiB/s (90.1MB/s), 24.5MiB/s-33.1MiB/s (25.7MB/s-34.7MB/s), io=859MiB (901MB), run=10003-10004msec 00:30:29.964 ----------------------------------------------------- 00:30:29.964 Suppressions used: 00:30:29.964 count bytes template 00:30:29.964 5 44 /usr/src/fio/parse.c 00:30:29.964 1 8 libtcmalloc_minimal.so 00:30:29.964 1 904 libcrypto.so 00:30:29.964 ----------------------------------------------------- 00:30:29.964 00:30:29.964 14:13:09 -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:29.964 14:13:09 -- target/dif.sh@43 -- # local sub 00:30:29.964 14:13:09 -- target/dif.sh@45 -- # for sub in "$@" 00:30:29.964 14:13:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:29.964 14:13:09 -- target/dif.sh@36 -- # local sub_id=0 00:30:29.964 14:13:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:29.964 14:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:29.964 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:30.225 14:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.225 14:13:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:30.225 14:13:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:30.225 14:13:09 -- 
common/autotest_common.sh@10 -- # set +x 00:30:30.225 14:13:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:30.225 00:30:30.225 real 0m12.529s 00:30:30.225 user 0m29.545s 00:30:30.225 sys 0m2.581s 00:30:30.225 14:13:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:30.225 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:30:30.225 ************************************ 00:30:30.225 END TEST fio_dif_digest 00:30:30.225 ************************************ 00:30:30.225 14:13:09 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:30.225 14:13:09 -- target/dif.sh@147 -- # nvmftestfini 00:30:30.225 14:13:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:30.225 14:13:09 -- nvmf/common.sh@117 -- # sync 00:30:30.225 14:13:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:30.225 14:13:09 -- nvmf/common.sh@120 -- # set +e 00:30:30.225 14:13:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:30.225 14:13:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:30.225 rmmod nvme_tcp 00:30:30.225 rmmod nvme_fabrics 00:30:30.225 rmmod nvme_keyring 00:30:30.225 14:13:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:30.225 14:13:09 -- nvmf/common.sh@124 -- # set -e 00:30:30.225 14:13:09 -- nvmf/common.sh@125 -- # return 0 00:30:30.225 14:13:09 -- nvmf/common.sh@478 -- # '[' -n 92690 ']' 00:30:30.225 14:13:09 -- nvmf/common.sh@479 -- # killprocess 92690 00:30:30.225 14:13:09 -- common/autotest_common.sh@936 -- # '[' -z 92690 ']' 00:30:30.225 14:13:09 -- common/autotest_common.sh@940 -- # kill -0 92690 00:30:30.225 14:13:09 -- common/autotest_common.sh@941 -- # uname 00:30:30.225 14:13:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:30.225 14:13:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92690 00:30:30.225 14:13:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:30.225 14:13:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:30.225 14:13:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92690' 00:30:30.225 killing process with pid 92690 00:30:30.225 14:13:09 -- common/autotest_common.sh@955 -- # kill 92690 00:30:30.225 14:13:09 -- common/autotest_common.sh@960 -- # wait 92690 00:30:31.599 14:13:11 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:30:31.599 14:13:11 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:32.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:32.175 Waiting for block devices as requested 00:30:32.175 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:32.450 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:32.450 14:13:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:32.450 14:13:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:32.450 14:13:11 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:32.450 14:13:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:32.451 14:13:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.451 14:13:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:32.451 14:13:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.451 14:13:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:32.451 00:30:32.451 real 1m11.219s 00:30:32.451 user 4m9.121s 00:30:32.451 sys 0m19.827s 00:30:32.451 14:13:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:32.451 
14:13:11 -- common/autotest_common.sh@10 -- # set +x 00:30:32.451 ************************************ 00:30:32.451 END TEST nvmf_dif 00:30:32.451 ************************************ 00:30:32.451 14:13:12 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:32.451 14:13:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:32.451 14:13:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:32.451 14:13:12 -- common/autotest_common.sh@10 -- # set +x 00:30:32.710 ************************************ 00:30:32.710 START TEST nvmf_abort_qd_sizes 00:30:32.710 ************************************ 00:30:32.710 14:13:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:32.710 * Looking for test storage... 00:30:32.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:30:32.710 14:13:12 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:32.710 14:13:12 -- nvmf/common.sh@7 -- # uname -s 00:30:32.710 14:13:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:32.710 14:13:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:32.710 14:13:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:32.710 14:13:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:32.710 14:13:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:32.710 14:13:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:32.710 14:13:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:32.710 14:13:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:32.710 14:13:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:32.710 14:13:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:32.710 14:13:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:30:32.710 14:13:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:30:32.710 14:13:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:32.710 14:13:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:32.710 14:13:12 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:32.710 14:13:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:32.710 14:13:12 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:32.710 14:13:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:32.710 14:13:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:32.710 14:13:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:32.710 14:13:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.710 14:13:12 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.710 14:13:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.710 14:13:12 -- paths/export.sh@5 -- # export PATH 00:30:32.710 14:13:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:32.710 14:13:12 -- nvmf/common.sh@47 -- # : 0 00:30:32.710 14:13:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:32.710 14:13:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:32.710 14:13:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:32.710 14:13:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:32.710 14:13:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:32.710 14:13:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:32.710 14:13:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:32.710 14:13:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:32.710 14:13:12 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:32.710 14:13:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:32.710 14:13:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.710 14:13:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:32.710 14:13:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:32.710 14:13:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:32.710 14:13:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.711 14:13:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:32.711 14:13:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.711 14:13:12 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:30:32.711 14:13:12 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:32.711 14:13:12 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:32.711 14:13:12 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:32.711 14:13:12 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:32.711 14:13:12 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:32.711 14:13:12 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.711 14:13:12 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.711 14:13:12 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:32.711 14:13:12 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:32.711 14:13:12 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:32.711 14:13:12 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
00:30:32.711 14:13:12 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:32.711 14:13:12 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.711 14:13:12 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:32.711 14:13:12 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:32.711 14:13:12 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:32.711 14:13:12 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:32.711 14:13:12 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:32.711 14:13:12 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:32.711 Cannot find device "nvmf_tgt_br" 00:30:32.711 14:13:12 -- nvmf/common.sh@155 -- # true 00:30:32.711 14:13:12 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:32.711 Cannot find device "nvmf_tgt_br2" 00:30:32.711 14:13:12 -- nvmf/common.sh@156 -- # true 00:30:32.711 14:13:12 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:32.970 14:13:12 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:32.970 Cannot find device "nvmf_tgt_br" 00:30:32.970 14:13:12 -- nvmf/common.sh@158 -- # true 00:30:32.970 14:13:12 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:32.970 Cannot find device "nvmf_tgt_br2" 00:30:32.970 14:13:12 -- nvmf/common.sh@159 -- # true 00:30:32.970 14:13:12 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:32.970 14:13:12 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:32.970 14:13:12 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:32.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:32.970 14:13:12 -- nvmf/common.sh@162 -- # true 00:30:32.970 14:13:12 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:32.970 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:32.970 14:13:12 -- nvmf/common.sh@163 -- # true 00:30:32.970 14:13:12 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:32.970 14:13:12 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:32.970 14:13:12 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:32.970 14:13:12 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:32.970 14:13:12 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:32.970 14:13:12 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:32.970 14:13:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:32.970 14:13:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:32.970 14:13:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:32.970 14:13:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:32.970 14:13:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:32.970 14:13:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:32.970 14:13:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:32.970 14:13:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:32.970 14:13:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:32.970 14:13:12 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:32.970 14:13:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:32.970 14:13:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:32.970 14:13:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:33.228 14:13:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:33.228 14:13:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:33.228 14:13:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:33.228 14:13:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:33.228 14:13:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:33.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:33.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:30:33.228 00:30:33.228 --- 10.0.0.2 ping statistics --- 00:30:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.228 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:30:33.228 14:13:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:33.228 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:33.228 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:30:33.228 00:30:33.228 --- 10.0.0.3 ping statistics --- 00:30:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.228 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:30:33.228 14:13:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:33.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:33.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:30:33.228 00:30:33.228 --- 10.0.0.1 ping statistics --- 00:30:33.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:33.228 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:30:33.228 14:13:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:33.228 14:13:12 -- nvmf/common.sh@422 -- # return 0 00:30:33.228 14:13:12 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:30:33.228 14:13:12 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:34.163 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:34.163 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:30:34.163 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:30:34.163 14:13:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.163 14:13:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:34.163 14:13:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:34.163 14:13:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.164 14:13:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:34.164 14:13:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:34.164 14:13:13 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:30:34.164 14:13:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:34.164 14:13:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:34.164 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:30:34.164 14:13:13 -- nvmf/common.sh@470 -- # nvmfpid=94147 00:30:34.164 14:13:13 -- nvmf/common.sh@471 -- # waitforlisten 94147 00:30:34.164 14:13:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:30:34.164 14:13:13 -- 
common/autotest_common.sh@817 -- # '[' -z 94147 ']' 00:30:34.164 14:13:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.164 14:13:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:34.164 14:13:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.164 14:13:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:34.164 14:13:13 -- common/autotest_common.sh@10 -- # set +x 00:30:34.422 [2024-04-26 14:13:13.911933] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:30:34.422 [2024-04-26 14:13:13.912051] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.422 [2024-04-26 14:13:14.083644] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:34.679 [2024-04-26 14:13:14.331202] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.679 [2024-04-26 14:13:14.331273] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.679 [2024-04-26 14:13:14.331291] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.679 [2024-04-26 14:13:14.331304] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.679 [2024-04-26 14:13:14.331319] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
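Before nvmf_tgt came up, nvmf_veth_init (traced a little earlier) put the target into its own network namespace so the TCP transport runs over a veth/bridge link rather than loopback. Condensed to its essentials, with the names and addresses from the trace (the second target interface, 10.0.0.3, is set up the same way and omitted here), the topology is built roughly like this:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # host -> namespaced target address, as checked in the trace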
00:30:34.679 [2024-04-26 14:13:14.332368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.679 [2024-04-26 14:13:14.332426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:34.679 [2024-04-26 14:13:14.332544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.679 [2024-04-26 14:13:14.332581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.246 14:13:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:35.246 14:13:14 -- common/autotest_common.sh@850 -- # return 0 00:30:35.246 14:13:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:35.246 14:13:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:35.246 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:30:35.246 14:13:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.246 14:13:14 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:35.246 14:13:14 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:35.246 14:13:14 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:35.246 14:13:14 -- scripts/common.sh@309 -- # local bdf bdfs 00:30:35.246 14:13:14 -- scripts/common.sh@310 -- # local nvmes 00:30:35.246 14:13:14 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:30:35.246 14:13:14 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:30:35.246 14:13:14 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:30:35.246 14:13:14 -- scripts/common.sh@295 -- # local bdf= 00:30:35.246 14:13:14 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:30:35.246 14:13:14 -- scripts/common.sh@230 -- # local class 00:30:35.246 14:13:14 -- scripts/common.sh@231 -- # local subclass 00:30:35.246 14:13:14 -- scripts/common.sh@232 -- # local progif 00:30:35.246 14:13:14 -- scripts/common.sh@233 -- # printf %02x 1 00:30:35.246 14:13:14 -- scripts/common.sh@233 -- # class=01 00:30:35.246 14:13:14 -- scripts/common.sh@234 -- # printf %02x 8 00:30:35.246 14:13:14 -- scripts/common.sh@234 -- # subclass=08 00:30:35.246 14:13:14 -- scripts/common.sh@235 -- # printf %02x 2 00:30:35.246 14:13:14 -- scripts/common.sh@235 -- # progif=02 00:30:35.246 14:13:14 -- scripts/common.sh@237 -- # hash lspci 00:30:35.246 14:13:14 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:30:35.246 14:13:14 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:30:35.246 14:13:14 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:30:35.246 14:13:14 -- scripts/common.sh@240 -- # grep -i -- -p02 00:30:35.246 14:13:14 -- scripts/common.sh@242 -- # tr -d '"' 00:30:35.246 14:13:14 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:35.246 14:13:14 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:30:35.246 14:13:14 -- scripts/common.sh@15 -- # local i 00:30:35.246 14:13:14 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:30:35.246 14:13:14 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:35.246 14:13:14 -- scripts/common.sh@24 -- # return 0 00:30:35.246 14:13:14 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:30:35.246 14:13:14 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:30:35.246 14:13:14 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:30:35.246 14:13:14 -- scripts/common.sh@15 -- # local i 00:30:35.246 14:13:14 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:30:35.246 14:13:14 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:30:35.246 14:13:14 -- scripts/common.sh@24 -- # return 0 00:30:35.246 14:13:14 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:30:35.246 14:13:14 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:35.246 14:13:14 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:30:35.246 14:13:14 -- scripts/common.sh@320 -- # uname -s 00:30:35.246 14:13:14 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:35.246 14:13:14 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:35.246 14:13:14 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:35.246 14:13:14 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:30:35.246 14:13:14 -- scripts/common.sh@320 -- # uname -s 00:30:35.246 14:13:14 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:35.246 14:13:14 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:35.246 14:13:14 -- scripts/common.sh@325 -- # (( 2 )) 00:30:35.246 14:13:14 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:35.246 14:13:14 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:30:35.246 14:13:14 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:30:35.246 14:13:14 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:35.246 14:13:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:35.246 14:13:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:35.246 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:30:35.505 ************************************ 00:30:35.505 START TEST spdk_target_abort 00:30:35.505 ************************************ 00:30:35.505 14:13:14 -- common/autotest_common.sh@1111 -- # spdk_target 00:30:35.505 14:13:14 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:35.505 14:13:14 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:30:35.505 14:13:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.505 14:13:14 -- common/autotest_common.sh@10 -- # set +x 00:30:35.505 spdk_targetn1 00:30:35.505 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.505 14:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.505 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:35.505 [2024-04-26 14:13:15.026311] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.505 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:35.505 14:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.505 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:35.505 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:35.505 14:13:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.505 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:35.505 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:35.505 14:13:15 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:30:35.505 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:30:35.505 [2024-04-26 14:13:15.071832] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.505 14:13:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:35.505 14:13:15 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:35.506 14:13:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:38.795 Initializing NVMe Controllers 00:30:38.795 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:38.795 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:38.795 Initialization complete. Launching workers. 
00:30:38.795 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11194, failed: 0 00:30:38.795 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1068, failed to submit 10126 00:30:38.795 success 755, unsuccess 313, failed 0 00:30:38.795 14:13:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:38.795 14:13:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:42.995 Initializing NVMe Controllers 00:30:42.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:42.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:42.995 Initialization complete. Launching workers. 00:30:42.995 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5999, failed: 0 00:30:42.995 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1300, failed to submit 4699 00:30:42.995 success 256, unsuccess 1044, failed 0 00:30:42.995 14:13:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:42.995 14:13:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:45.530 Initializing NVMe Controllers 00:30:45.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:45.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:45.530 Initialization complete. Launching workers. 00:30:45.530 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29888, failed: 0 00:30:45.530 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2713, failed to submit 27175 00:30:45.530 success 400, unsuccess 2313, failed 0 00:30:45.530 14:13:25 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:45.530 14:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.530 14:13:25 -- common/autotest_common.sh@10 -- # set +x 00:30:45.530 14:13:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:45.530 14:13:25 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:45.530 14:13:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:45.530 14:13:25 -- common/autotest_common.sh@10 -- # set +x 00:30:46.909 14:13:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:46.909 14:13:26 -- target/abort_qd_sizes.sh@61 -- # killprocess 94147 00:30:46.909 14:13:26 -- common/autotest_common.sh@936 -- # '[' -z 94147 ']' 00:30:46.909 14:13:26 -- common/autotest_common.sh@940 -- # kill -0 94147 00:30:46.909 14:13:26 -- common/autotest_common.sh@941 -- # uname 00:30:46.909 14:13:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:46.909 14:13:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94147 00:30:46.909 14:13:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:46.909 killing process with pid 94147 00:30:46.909 14:13:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:46.910 14:13:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94147' 00:30:46.910 14:13:26 -- common/autotest_common.sh@955 -- # kill 94147 
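Each queue depth above is a separate run of SPDK's abort example against the same subsystem; only -q changes between runs. Reconstructed from the trace, the qd=64 invocation is:
  # queue depth 64, 4 KiB I/Os, mixed workload (-w rw with a 50% read mix via -M)
  /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
Broadly, 'success' counts abort commands that caught their target I/O still in flight, 'unsuccess' those whose target I/O had already completed before the abort arrived, and 'failed' aborts that themselves errored out.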
00:30:46.910 14:13:26 -- common/autotest_common.sh@960 -- # wait 94147 00:30:47.887 00:30:47.887 real 0m12.356s 00:30:47.887 user 0m47.399s 00:30:47.887 sys 0m2.339s 00:30:47.887 14:13:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:47.887 ************************************ 00:30:47.887 END TEST spdk_target_abort 00:30:47.887 ************************************ 00:30:47.887 14:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:47.887 14:13:27 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:47.887 14:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:30:47.887 14:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:47.887 14:13:27 -- common/autotest_common.sh@10 -- # set +x 00:30:47.887 ************************************ 00:30:47.887 START TEST kernel_target_abort 00:30:47.887 ************************************ 00:30:47.887 14:13:27 -- common/autotest_common.sh@1111 -- # kernel_target 00:30:47.887 14:13:27 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:47.887 14:13:27 -- nvmf/common.sh@717 -- # local ip 00:30:47.887 14:13:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:30:47.887 14:13:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:30:47.887 14:13:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:47.887 14:13:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:47.887 14:13:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:30:47.887 14:13:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:47.887 14:13:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:30:47.887 14:13:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:30:47.887 14:13:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:30:47.887 14:13:27 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:47.887 14:13:27 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:47.887 14:13:27 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:30:47.887 14:13:27 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:47.887 14:13:27 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:47.887 14:13:27 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:47.887 14:13:27 -- nvmf/common.sh@628 -- # local block nvme 00:30:47.887 14:13:27 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:47.887 14:13:27 -- nvmf/common.sh@631 -- # modprobe nvmet 00:30:47.887 14:13:27 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:47.887 14:13:27 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:48.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:48.455 Waiting for block devices as requested 00:30:48.455 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:48.713 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:49.650 14:13:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:49.650 14:13:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:30:49.650 14:13:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:30:49.650 14:13:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:30:49.650 14:13:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:30:49.650 14:13:29 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:30:49.650 14:13:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:30:49.650 No valid GPT data, bailing 00:30:49.650 14:13:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:49.650 14:13:29 -- scripts/common.sh@391 -- # pt= 00:30:49.650 14:13:29 -- scripts/common.sh@392 -- # return 1 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:30:49.650 14:13:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:49.650 14:13:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:30:49.650 14:13:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:30:49.650 14:13:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:30:49.650 14:13:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:30:49.650 14:13:29 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:30:49.650 14:13:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:30:49.650 No valid GPT data, bailing 00:30:49.650 14:13:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:30:49.650 14:13:29 -- scripts/common.sh@391 -- # pt= 00:30:49.650 14:13:29 -- scripts/common.sh@392 -- # return 1 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:30:49.650 14:13:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:49.650 14:13:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:30:49.650 14:13:29 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:30:49.650 14:13:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:30:49.650 14:13:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:30:49.650 14:13:29 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:30:49.650 14:13:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:30:49.650 No valid GPT data, bailing 00:30:49.650 14:13:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:30:49.650 14:13:29 -- scripts/common.sh@391 -- # pt= 00:30:49.650 14:13:29 -- scripts/common.sh@392 -- # return 1 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:30:49.650 14:13:29 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:30:49.650 14:13:29 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:30:49.650 14:13:29 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:30:49.650 14:13:29 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:30:49.650 14:13:29 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:30:49.650 14:13:29 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:30:49.650 14:13:29 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:30:49.650 14:13:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:30:49.909 No valid GPT data, bailing 00:30:49.909 14:13:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:30:49.909 14:13:29 -- scripts/common.sh@391 -- # pt= 00:30:49.909 14:13:29 -- scripts/common.sh@392 -- # return 1 00:30:49.909 14:13:29 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:30:49.909 14:13:29 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:30:49.909 14:13:29 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:49.909 14:13:29 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:49.909 14:13:29 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:30:49.909 14:13:29 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:30:49.909 14:13:29 -- nvmf/common.sh@656 -- # echo 1 00:30:49.909 14:13:29 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:30:49.909 14:13:29 -- nvmf/common.sh@658 -- # echo 1 00:30:49.909 14:13:29 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:30:49.909 14:13:29 -- nvmf/common.sh@661 -- # echo tcp 00:30:49.909 14:13:29 -- nvmf/common.sh@662 -- # echo 4420 00:30:49.909 14:13:29 -- nvmf/common.sh@663 -- # echo ipv4 00:30:49.909 14:13:29 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:30:49.909 14:13:29 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 --hostid=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 -a 10.0.0.1 -t tcp -s 4420 00:30:49.909 00:30:49.909 Discovery Log Number of Records 2, Generation counter 2 00:30:49.909 =====Discovery Log Entry 0====== 00:30:49.909 trtype: tcp 00:30:49.909 adrfam: ipv4 00:30:49.909 subtype: current discovery subsystem 00:30:49.909 treq: not specified, sq flow control disable supported 00:30:49.909 portid: 1 00:30:49.909 trsvcid: 4420 00:30:49.909 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:30:49.909 traddr: 10.0.0.1 00:30:49.909 eflags: none 00:30:49.909 sectype: none 00:30:49.909 =====Discovery Log Entry 1====== 00:30:49.909 trtype: tcp 00:30:49.909 adrfam: ipv4 00:30:49.909 subtype: nvme subsystem 00:30:49.909 treq: not specified, sq flow control disable supported 00:30:49.909 portid: 1 00:30:49.909 trsvcid: 4420 00:30:49.909 subnqn: nqn.2016-06.io.spdk:testnqn 00:30:49.909 traddr: 10.0.0.1 00:30:49.909 eflags: none 00:30:49.909 sectype: none 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:30:49.909 
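For kernel_target_abort the target side moves into the kernel: nvmet exports /dev/nvme1n1 over TCP at 10.0.0.1:4420 and SPDK's abort example stays on as the initiator. The configure_kernel_target steps traced above are plain configfs operations; xtrace does not print redirection targets, so the attribute file names below are filled in from the standard nvmet configfs layout rather than from the log itself:
  modprobe nvmet   # plus nvmet_tcp if it is not auto-loaded when the port is enabled
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir $sub
  mkdir $sub/namespaces/1
  echo 1            > $sub/attr_allow_any_host
  echo /dev/nvme1n1 > $sub/namespaces/1/device_path
  echo 1            > $sub/namespaces/1/enable
  port=/sys/kernel/config/nvmet/ports/1
  mkdir $port
  echo 10.0.0.1 > $port/addr_traddr
  echo tcp      > $port/addr_trtype
  echo 4420     > $port/addr_trsvcid
  echo ipv4     > $port/addr_adrfam
  ln -s $sub $port/subsystems/nqn.2016-06.io.spdk:testnqn
  nvme discover -t tcp -a 10.0.0.1 -s 4420   # the test also passes --hostnqn/--hostid
The discovery log above, with its two entries (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420), is the confirmation that this wiring worked before rabort starts issuing and aborting I/O.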
14:13:29 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:49.909 14:13:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:53.196 Initializing NVMe Controllers 00:30:53.196 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:53.196 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:53.196 Initialization complete. Launching workers. 00:30:53.196 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35028, failed: 0 00:30:53.196 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35028, failed to submit 0 00:30:53.196 success 0, unsuccess 35028, failed 0 00:30:53.196 14:13:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:53.196 14:13:32 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:56.515 Initializing NVMe Controllers 00:30:56.515 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:56.515 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:56.515 Initialization complete. Launching workers. 
00:30:56.515 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72438, failed: 0 00:30:56.515 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32842, failed to submit 39596 00:30:56.515 success 0, unsuccess 32842, failed 0 00:30:56.515 14:13:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:56.515 14:13:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:59.803 Initializing NVMe Controllers 00:30:59.803 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:59.803 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:59.803 Initialization complete. Launching workers. 00:30:59.803 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98197, failed: 0 00:30:59.803 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24542, failed to submit 73655 00:30:59.803 success 0, unsuccess 24542, failed 0 00:30:59.803 14:13:39 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:59.803 14:13:39 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:59.803 14:13:39 -- nvmf/common.sh@675 -- # echo 0 00:30:59.803 14:13:39 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.803 14:13:39 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:59.803 14:13:39 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:59.803 14:13:39 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.803 14:13:39 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:30:59.803 14:13:39 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:30:59.803 14:13:39 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:00.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:03.274 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:31:03.274 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:31:03.274 00:31:03.274 real 0m15.398s 00:31:03.274 user 0m7.028s 00:31:03.274 sys 0m5.967s 00:31:03.274 14:13:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:03.274 ************************************ 00:31:03.274 END TEST kernel_target_abort 00:31:03.274 ************************************ 00:31:03.274 14:13:42 -- common/autotest_common.sh@10 -- # set +x 00:31:03.274 14:13:42 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:03.274 14:13:42 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:03.274 14:13:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:03.274 14:13:42 -- nvmf/common.sh@117 -- # sync 00:31:03.274 14:13:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:03.274 14:13:42 -- nvmf/common.sh@120 -- # set +e 00:31:03.274 14:13:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:03.274 14:13:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:03.274 rmmod nvme_tcp 00:31:03.532 rmmod nvme_fabrics 00:31:03.532 rmmod nvme_keyring 00:31:03.532 14:13:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:03.532 14:13:42 -- nvmf/common.sh@124 -- # set -e 00:31:03.532 
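The three runs above are the heart of kernel_target_abort: the same abort example is pointed at the kernel target with queue depths 4, 24 and 64, and the share of outstanding I/O that can actually be aborted shrinks as the queue depth grows (35028 of 35028 aborts submitted at qd=4, 32842 of 72438 at qd=24, 24542 of 98197 at qd=64), which is exactly the behaviour the qd-sizes test is meant to exercise. Condensed, the loop the script runs looks like this (paths relative to the SPDK repo root; the transport string matches the one assembled in the trace):

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done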
14:13:42 -- nvmf/common.sh@125 -- # return 0 00:31:03.532 14:13:42 -- nvmf/common.sh@478 -- # '[' -n 94147 ']' 00:31:03.532 14:13:42 -- nvmf/common.sh@479 -- # killprocess 94147 00:31:03.532 14:13:43 -- common/autotest_common.sh@936 -- # '[' -z 94147 ']' 00:31:03.532 Process with pid 94147 is not found 00:31:03.532 14:13:43 -- common/autotest_common.sh@940 -- # kill -0 94147 00:31:03.532 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (94147) - No such process 00:31:03.532 14:13:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 94147 is not found' 00:31:03.532 14:13:43 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:31:03.532 14:13:43 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:04.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:04.098 Waiting for block devices as requested 00:31:04.098 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.098 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:04.357 14:13:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:04.357 14:13:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:04.357 14:13:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:04.357 14:13:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:04.357 14:13:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:04.357 14:13:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:04.357 14:13:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:04.357 14:13:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:04.357 00:31:04.357 real 0m31.685s 00:31:04.357 user 0m55.732s 00:31:04.357 sys 0m10.234s 00:31:04.357 14:13:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:04.357 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:31:04.357 ************************************ 00:31:04.357 END TEST nvmf_abort_qd_sizes 00:31:04.357 ************************************ 00:31:04.357 14:13:43 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:04.357 14:13:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:04.357 14:13:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:04.357 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:31:04.357 ************************************ 00:31:04.357 START TEST keyring_file 00:31:04.357 ************************************ 00:31:04.357 14:13:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:31:04.616 * Looking for test storage... 
00:31:04.616 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:31:04.616 14:13:44 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:31:04.616 14:13:44 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:04.616 14:13:44 -- nvmf/common.sh@7 -- # uname -s 00:31:04.616 14:13:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:04.616 14:13:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:04.616 14:13:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:04.616 14:13:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:04.616 14:13:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:04.616 14:13:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:04.616 14:13:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:04.616 14:13:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:04.616 14:13:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:04.616 14:13:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:04.616 14:13:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:31:04.616 14:13:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=dad207e9-ba1f-4f79-9bc5-82cb3e27c604 00:31:04.616 14:13:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:04.616 14:13:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:04.616 14:13:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:04.616 14:13:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:04.616 14:13:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:04.616 14:13:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:04.616 14:13:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:04.616 14:13:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:04.616 14:13:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.616 14:13:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.616 14:13:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.616 14:13:44 -- paths/export.sh@5 -- # export PATH 00:31:04.617 14:13:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:04.617 14:13:44 -- nvmf/common.sh@47 -- # : 0 00:31:04.617 14:13:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:04.617 14:13:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:04.617 14:13:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:04.617 14:13:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:04.617 14:13:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:04.617 14:13:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:04.617 14:13:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:04.617 14:13:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:04.617 14:13:44 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:04.617 14:13:44 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:04.617 14:13:44 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:04.617 14:13:44 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:04.617 14:13:44 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:04.617 14:13:44 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:04.617 14:13:44 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:04.617 14:13:44 -- keyring/common.sh@15 -- # local name key digest path 00:31:04.617 14:13:44 -- keyring/common.sh@17 -- # name=key0 00:31:04.617 14:13:44 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:04.617 14:13:44 -- keyring/common.sh@17 -- # digest=0 00:31:04.617 14:13:44 -- keyring/common.sh@18 -- # mktemp 00:31:04.617 14:13:44 -- keyring/common.sh@18 -- # path=/tmp/tmp.NJb7nczPX8 00:31:04.617 14:13:44 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:04.617 14:13:44 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:04.617 14:13:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:04.617 14:13:44 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:04.617 14:13:44 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:04.617 14:13:44 -- nvmf/common.sh@693 -- # digest=0 00:31:04.617 14:13:44 -- nvmf/common.sh@694 -- # python - 00:31:04.617 14:13:44 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NJb7nczPX8 00:31:04.617 14:13:44 -- keyring/common.sh@23 -- # echo /tmp/tmp.NJb7nczPX8 00:31:04.617 14:13:44 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.NJb7nczPX8 00:31:04.617 14:13:44 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:04.617 14:13:44 -- keyring/common.sh@15 -- # local name key digest path 00:31:04.617 14:13:44 -- keyring/common.sh@17 -- # name=key1 00:31:04.617 14:13:44 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:04.617 14:13:44 -- keyring/common.sh@17 -- # digest=0 00:31:04.617 14:13:44 -- keyring/common.sh@18 -- # mktemp 00:31:04.617 14:13:44 -- keyring/common.sh@18 -- # path=/tmp/tmp.DGD7rmw0DI 00:31:04.617 14:13:44 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:04.617 14:13:44 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:31:04.617 14:13:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:04.617 14:13:44 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:04.617 14:13:44 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:31:04.617 14:13:44 -- nvmf/common.sh@693 -- # digest=0 00:31:04.617 14:13:44 -- nvmf/common.sh@694 -- # python - 00:31:04.876 14:13:44 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DGD7rmw0DI 00:31:04.876 14:13:44 -- keyring/common.sh@23 -- # echo /tmp/tmp.DGD7rmw0DI 00:31:04.876 14:13:44 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DGD7rmw0DI 00:31:04.876 14:13:44 -- keyring/file.sh@30 -- # tgtpid=95286 00:31:04.876 14:13:44 -- keyring/file.sh@32 -- # waitforlisten 95286 00:31:04.876 14:13:44 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:04.876 14:13:44 -- common/autotest_common.sh@817 -- # '[' -z 95286 ']' 00:31:04.876 14:13:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.876 14:13:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:04.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.876 14:13:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.876 14:13:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:04.876 14:13:44 -- common/autotest_common.sh@10 -- # set +x 00:31:04.876 [2024-04-26 14:13:44.436918] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 00:31:04.876 [2024-04-26 14:13:44.437047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95286 ] 00:31:05.134 [2024-04-26 14:13:44.610565] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.392 [2024-04-26 14:13:44.851517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.330 14:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:06.330 14:13:45 -- common/autotest_common.sh@850 -- # return 0 00:31:06.330 14:13:45 -- keyring/file.sh@33 -- # rpc_cmd 00:31:06.330 14:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.330 14:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:06.330 [2024-04-26 14:13:45.841429] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.330 null0 00:31:06.330 [2024-04-26 14:13:45.873386] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:06.330 [2024-04-26 14:13:45.873700] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:06.330 [2024-04-26 14:13:45.881405] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:06.330 14:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:06.330 14:13:45 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:06.330 14:13:45 -- common/autotest_common.sh@638 -- # local es=0 00:31:06.330 14:13:45 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:06.330 14:13:45 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:31:06.330 14:13:45 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:06.330 14:13:45 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:31:06.330 14:13:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:06.330 14:13:45 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:06.330 14:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:06.330 14:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:06.330 [2024-04-26 14:13:45.897377] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:31:06.330 { 00:31:06.330 2024/04/26 14:13:45 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:31:06.330 "method": "nvmf_subsystem_add_listener", 00:31:06.330 "params": { 00:31:06.330 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.330 "secure_channel": false, 00:31:06.330 "listen_address": { 00:31:06.330 "trtype": "tcp", 00:31:06.330 "traddr": "127.0.0.1", 00:31:06.330 "trsvcid": "4420" 00:31:06.330 } 00:31:06.330 } 00:31:06.330 } 00:31:06.330 Got JSON-RPC error response 00:31:06.330 GoRPCClient: error on JSON-RPC call 00:31:06.330 14:13:45 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:31:06.330 14:13:45 -- common/autotest_common.sh@641 -- # es=1 00:31:06.330 14:13:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:06.330 14:13:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:06.330 14:13:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:06.330 14:13:45 -- keyring/file.sh@46 -- # bperfpid=95327 00:31:06.330 14:13:45 -- keyring/file.sh@48 -- # waitforlisten 95327 /var/tmp/bperf.sock 00:31:06.330 14:13:45 -- common/autotest_common.sh@817 -- # '[' -z 95327 ']' 00:31:06.330 14:13:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:06.330 14:13:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:06.330 14:13:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:06.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:06.330 14:13:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:06.330 14:13:45 -- common/autotest_common.sh@10 -- # set +x 00:31:06.330 14:13:45 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:06.330 [2024-04-26 14:13:45.996598] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
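The nvmf_subsystem_add_listener failure a few lines up is intentional: the 127.0.0.1:4420 listener for cnode0 was created with a secure channel, so re-adding it with secure_channel=false is rejected with -32602 ("A listener already exists with different secure channel option"), and the test counts that as a pass because the call is wrapped in the NOT helper. A minimal sketch of the pattern; the real NOT in autotest_common.sh also validates the wrapped command and inspects the exit code, as the expanded trace shows, and rpc_cmd is assumed to resolve to scripts/rpc.py against the target's default RPC socket.

    NOT() { ! "$@"; }    # sketch: succeed only when the wrapped command fails

    # must fail: 127.0.0.1:4420 already exists as a secure-channel listener on cnode0
    NOT scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0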
00:31:06.330 [2024-04-26 14:13:45.997058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95327 ] 00:31:06.590 [2024-04-26 14:13:46.155576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.849 [2024-04-26 14:13:46.424475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.415 14:13:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:07.415 14:13:46 -- common/autotest_common.sh@850 -- # return 0 00:31:07.415 14:13:46 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:07.415 14:13:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:07.415 14:13:47 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DGD7rmw0DI 00:31:07.415 14:13:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DGD7rmw0DI 00:31:07.675 14:13:47 -- keyring/file.sh@51 -- # jq -r .path 00:31:07.675 14:13:47 -- keyring/file.sh@51 -- # get_key key0 00:31:07.675 14:13:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.675 14:13:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:07.675 14:13:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:07.933 14:13:47 -- keyring/file.sh@51 -- # [[ /tmp/tmp.NJb7nczPX8 == \/\t\m\p\/\t\m\p\.\N\J\b\7\n\c\z\P\X\8 ]] 00:31:07.933 14:13:47 -- keyring/file.sh@52 -- # get_key key1 00:31:07.933 14:13:47 -- keyring/file.sh@52 -- # jq -r .path 00:31:07.933 14:13:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:07.933 14:13:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:07.933 14:13:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.192 14:13:47 -- keyring/file.sh@52 -- # [[ /tmp/tmp.DGD7rmw0DI == \/\t\m\p\/\t\m\p\.\D\G\D\7\r\m\w\0\D\I ]] 00:31:08.192 14:13:47 -- keyring/file.sh@53 -- # get_refcnt key0 00:31:08.192 14:13:47 -- keyring/common.sh@12 -- # get_key key0 00:31:08.192 14:13:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:08.192 14:13:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:08.192 14:13:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.192 14:13:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:08.451 14:13:47 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:08.451 14:13:47 -- keyring/file.sh@54 -- # get_refcnt key1 00:31:08.451 14:13:47 -- keyring/common.sh@12 -- # get_key key1 00:31:08.451 14:13:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:08.451 14:13:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:08.451 14:13:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:08.451 14:13:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:08.710 14:13:48 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:08.710 14:13:48 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:31:08.710 14:13:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:08.710 [2024-04-26 14:13:48.365912] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:08.970 nvme0n1 00:31:08.970 14:13:48 -- keyring/file.sh@59 -- # get_refcnt key0 00:31:08.970 14:13:48 -- keyring/common.sh@12 -- # get_key key0 00:31:08.970 14:13:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:08.970 14:13:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:08.970 14:13:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:08.970 14:13:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.229 14:13:48 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:09.229 14:13:48 -- keyring/file.sh@60 -- # get_refcnt key1 00:31:09.229 14:13:48 -- keyring/common.sh@12 -- # get_key key1 00:31:09.229 14:13:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:09.230 14:13:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:09.230 14:13:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:09.230 14:13:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:09.495 14:13:48 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:09.495 14:13:48 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:09.495 Running I/O for 1 seconds... 00:31:10.440 00:31:10.440 Latency(us) 00:31:10.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:10.440 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:10.440 nvme0n1 : 1.01 11945.95 46.66 0.00 0.00 10679.81 4553.30 16212.92 00:31:10.440 =================================================================================================================== 00:31:10.440 Total : 11945.95 46.66 0.00 0.00 10679.81 4553.30 16212.92 00:31:10.440 0 00:31:10.440 14:13:50 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:10.440 14:13:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:10.699 14:13:50 -- keyring/file.sh@65 -- # get_refcnt key0 00:31:10.699 14:13:50 -- keyring/common.sh@12 -- # get_key key0 00:31:10.699 14:13:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.699 14:13:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:10.699 14:13:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.699 14:13:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:10.958 14:13:50 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:10.958 14:13:50 -- keyring/file.sh@66 -- # get_refcnt key1 00:31:10.958 14:13:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:10.958 14:13:50 -- keyring/common.sh@12 -- # get_key key1 00:31:10.958 14:13:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:10.958 14:13:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:10.958 14:13:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.217 
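Everything in this part of the test goes through bperf_cmd, which, as the expanded trace shows, is simply scripts/rpc.py pointed at the bdevperf RPC socket, and get_refcnt is keyring_get_keys filtered through jq. The positive path condenses to the sketch below: register the two PSK files (the /tmp/tmp.* paths are this run's mktemp outputs), attach an NVMe/TCP controller that references key0, and confirm from refcnt that key0 is now held both by the key file and by the controller while key1 is untouched.

    rpc() { scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # bperf_cmd equivalent

    rpc keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8
    rpc keyring_file_add_key key1 /tmp/tmp.DGD7rmw0DI

    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'   # 2: key file + controller
    rpc keyring_get_keys | jq -r '.[] | select(.name == "key1").refcnt'   # 1: key file only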
14:13:50 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:11.217 14:13:50 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:11.217 14:13:50 -- common/autotest_common.sh@638 -- # local es=0 00:31:11.217 14:13:50 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:11.217 14:13:50 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:11.217 14:13:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:11.217 14:13:50 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:11.217 14:13:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:11.217 14:13:50 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:11.217 14:13:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:11.476 [2024-04-26 14:13:50.901455] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:11.476 [2024-04-26 14:13:50.901478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (107): Transport endpoint is not connected 00:31:11.476 [2024-04-26 14:13:50.902442] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009640 (9): Bad file descriptor 00:31:11.476 [2024-04-26 14:13:50.903435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:11.476 [2024-04-26 14:13:50.903470] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:11.476 [2024-04-26 14:13:50.903483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:11.476 2024/04/26 14:13:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:31:11.476 request: 00:31:11.476 { 00:31:11.476 "method": "bdev_nvme_attach_controller", 00:31:11.476 "params": { 00:31:11.476 "name": "nvme0", 00:31:11.476 "trtype": "tcp", 00:31:11.476 "traddr": "127.0.0.1", 00:31:11.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:11.476 "adrfam": "ipv4", 00:31:11.476 "trsvcid": "4420", 00:31:11.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:11.476 "psk": "key1" 00:31:11.476 } 00:31:11.476 } 00:31:11.476 Got JSON-RPC error response 00:31:11.476 GoRPCClient: error on JSON-RPC call 00:31:11.476 14:13:50 -- common/autotest_common.sh@641 -- # es=1 00:31:11.476 14:13:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:11.476 14:13:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:11.476 14:13:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:11.476 14:13:50 -- keyring/file.sh@71 -- # get_refcnt key0 00:31:11.476 14:13:50 -- keyring/common.sh@12 -- # get_key key0 00:31:11.476 14:13:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:11.476 14:13:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:11.476 14:13:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:11.476 14:13:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.476 14:13:51 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:11.476 14:13:51 -- keyring/file.sh@72 -- # get_refcnt key1 00:31:11.476 14:13:51 -- keyring/common.sh@12 -- # get_key key1 00:31:11.735 14:13:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:11.735 14:13:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:11.735 14:13:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:11.735 14:13:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:11.735 14:13:51 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:11.735 14:13:51 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:11.735 14:13:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:11.993 14:13:51 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:11.993 14:13:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:12.252 14:13:51 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:12.252 14:13:51 -- keyring/file.sh@77 -- # jq length 00:31:12.252 14:13:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.510 14:13:51 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:12.510 14:13:51 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.NJb7nczPX8 00:31:12.510 14:13:51 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:12.510 14:13:51 -- common/autotest_common.sh@638 -- # local es=0 00:31:12.510 14:13:51 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:12.510 14:13:51 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:12.510 
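The chmod 0660 just issued sets up the next negative case: keyring_file_add_key refuses key files that anyone other than the owner can read, which is why the trace below reports "Invalid permissions for key file ... 0100660" before the test restores 0600 and re-adds the key. Reusing the rpc and NOT sketches from above, the check amounts to:

    key=/tmp/tmp.NJb7nczPX8          # key file created earlier in the run

    chmod 0660 "$key"
    NOT rpc keyring_file_add_key key0 "$key"   # rejected: key files must be owner-only (0600)

    chmod 0600 "$key"
    rpc keyring_file_add_key key0 "$key"       # accepted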
14:13:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:12.510 14:13:51 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:12.510 14:13:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:12.510 14:13:51 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:12.511 14:13:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:12.511 [2024-04-26 14:13:52.161765] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.NJb7nczPX8': 0100660 00:31:12.511 [2024-04-26 14:13:52.161821] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:12.511 2024/04/26 14:13:52 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.NJb7nczPX8], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:31:12.511 request: 00:31:12.511 { 00:31:12.511 "method": "keyring_file_add_key", 00:31:12.511 "params": { 00:31:12.511 "name": "key0", 00:31:12.511 "path": "/tmp/tmp.NJb7nczPX8" 00:31:12.511 } 00:31:12.511 } 00:31:12.511 Got JSON-RPC error response 00:31:12.511 GoRPCClient: error on JSON-RPC call 00:31:12.511 14:13:52 -- common/autotest_common.sh@641 -- # es=1 00:31:12.771 14:13:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:12.771 14:13:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:12.771 14:13:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:12.771 14:13:52 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.NJb7nczPX8 00:31:12.771 14:13:52 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:12.771 14:13:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NJb7nczPX8 00:31:12.771 14:13:52 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.NJb7nczPX8 00:31:12.771 14:13:52 -- keyring/file.sh@88 -- # get_refcnt key0 00:31:12.771 14:13:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:12.771 14:13:52 -- keyring/common.sh@12 -- # get_key key0 00:31:12.771 14:13:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:12.771 14:13:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:12.771 14:13:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:13.030 14:13:52 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:13.031 14:13:52 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.031 14:13:52 -- common/autotest_common.sh@638 -- # local es=0 00:31:13.031 14:13:52 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.031 14:13:52 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:31:13.031 14:13:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:13.031 14:13:52 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:31:13.031 14:13:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:31:13.031 14:13:52 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.031 14:13:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.289 [2024-04-26 14:13:52.845865] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.NJb7nczPX8': No such file or directory 00:31:13.289 [2024-04-26 14:13:52.845925] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:13.289 [2024-04-26 14:13:52.845955] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:13.289 [2024-04-26 14:13:52.845967] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:13.289 [2024-04-26 14:13:52.845980] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:13.290 2024/04/26 14:13:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:31:13.290 request: 00:31:13.290 { 00:31:13.290 "method": "bdev_nvme_attach_controller", 00:31:13.290 "params": { 00:31:13.290 "name": "nvme0", 00:31:13.290 "trtype": "tcp", 00:31:13.290 "traddr": "127.0.0.1", 00:31:13.290 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.290 "adrfam": "ipv4", 00:31:13.290 "trsvcid": "4420", 00:31:13.290 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.290 "psk": "key0" 00:31:13.290 } 00:31:13.290 } 00:31:13.290 Got JSON-RPC error response 00:31:13.290 GoRPCClient: error on JSON-RPC call 00:31:13.290 14:13:52 -- common/autotest_common.sh@641 -- # es=1 00:31:13.290 14:13:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:31:13.290 14:13:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:31:13.290 14:13:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:31:13.290 14:13:52 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:13.290 14:13:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:13.547 14:13:53 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:13.547 14:13:53 -- keyring/common.sh@15 -- # local name key digest path 00:31:13.547 14:13:53 -- keyring/common.sh@17 -- # name=key0 00:31:13.547 14:13:53 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:13.547 14:13:53 -- keyring/common.sh@17 -- # digest=0 00:31:13.547 14:13:53 -- keyring/common.sh@18 -- # mktemp 00:31:13.547 14:13:53 -- keyring/common.sh@18 -- # path=/tmp/tmp.WQZFaj0bmf 00:31:13.547 14:13:53 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:13.547 14:13:53 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:13.547 14:13:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:31:13.547 14:13:53 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:31:13.547 14:13:53 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:31:13.547 14:13:53 -- nvmf/common.sh@693 -- # digest=0 00:31:13.547 14:13:53 -- nvmf/common.sh@694 -- # python - 00:31:13.547 
14:13:53 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WQZFaj0bmf 00:31:13.547 14:13:53 -- keyring/common.sh@23 -- # echo /tmp/tmp.WQZFaj0bmf 00:31:13.547 14:13:53 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.WQZFaj0bmf 00:31:13.547 14:13:53 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WQZFaj0bmf 00:31:13.547 14:13:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WQZFaj0bmf 00:31:13.805 14:13:53 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:13.805 14:13:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:14.063 nvme0n1 00:31:14.063 14:13:53 -- keyring/file.sh@99 -- # get_refcnt key0 00:31:14.063 14:13:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:14.063 14:13:53 -- keyring/common.sh@12 -- # get_key key0 00:31:14.063 14:13:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:14.063 14:13:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:14.063 14:13:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.322 14:13:53 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:14.322 14:13:53 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:14.322 14:13:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:14.581 14:13:54 -- keyring/file.sh@101 -- # get_key key0 00:31:14.581 14:13:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:14.581 14:13:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.581 14:13:54 -- keyring/file.sh@101 -- # jq -r .removed 00:31:14.581 14:13:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:14.581 14:13:54 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:14.581 14:13:54 -- keyring/file.sh@102 -- # get_refcnt key0 00:31:14.581 14:13:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:14.581 14:13:54 -- keyring/common.sh@12 -- # get_key key0 00:31:14.581 14:13:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:14.581 14:13:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:14.581 14:13:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:14.839 14:13:54 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:14.839 14:13:54 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:14.839 14:13:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:15.098 14:13:54 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:15.098 14:13:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:15.098 14:13:54 -- keyring/file.sh@104 -- # jq length 00:31:15.356 14:13:54 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:15.356 14:13:54 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WQZFaj0bmf 00:31:15.356 14:13:54 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WQZFaj0bmf 00:31:15.614 14:13:55 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DGD7rmw0DI 00:31:15.614 14:13:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DGD7rmw0DI 00:31:15.614 14:13:55 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:15.614 14:13:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:15.872 nvme0n1 00:31:15.872 14:13:55 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:31:15.872 14:13:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:31:16.442 14:13:55 -- keyring/file.sh@112 -- # config='{ 00:31:16.442 "subsystems": [ 00:31:16.442 { 00:31:16.442 "subsystem": "keyring", 00:31:16.442 "config": [ 00:31:16.442 { 00:31:16.442 "method": "keyring_file_add_key", 00:31:16.442 "params": { 00:31:16.442 "name": "key0", 00:31:16.442 "path": "/tmp/tmp.WQZFaj0bmf" 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "keyring_file_add_key", 00:31:16.442 "params": { 00:31:16.442 "name": "key1", 00:31:16.442 "path": "/tmp/tmp.DGD7rmw0DI" 00:31:16.442 } 00:31:16.442 } 00:31:16.442 ] 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "subsystem": "iobuf", 00:31:16.442 "config": [ 00:31:16.442 { 00:31:16.442 "method": "iobuf_set_options", 00:31:16.442 "params": { 00:31:16.442 "large_bufsize": 135168, 00:31:16.442 "large_pool_count": 1024, 00:31:16.442 "small_bufsize": 8192, 00:31:16.442 "small_pool_count": 8192 00:31:16.442 } 00:31:16.442 } 00:31:16.442 ] 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "subsystem": "sock", 00:31:16.442 "config": [ 00:31:16.442 { 00:31:16.442 "method": "sock_impl_set_options", 00:31:16.442 "params": { 00:31:16.442 "enable_ktls": false, 00:31:16.442 "enable_placement_id": 0, 00:31:16.442 "enable_quickack": false, 00:31:16.442 "enable_recv_pipe": true, 00:31:16.442 "enable_zerocopy_send_client": false, 00:31:16.442 "enable_zerocopy_send_server": true, 00:31:16.442 "impl_name": "posix", 00:31:16.442 "recv_buf_size": 2097152, 00:31:16.442 "send_buf_size": 2097152, 00:31:16.442 "tls_version": 0, 00:31:16.442 "zerocopy_threshold": 0 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "sock_impl_set_options", 00:31:16.442 "params": { 00:31:16.442 "enable_ktls": false, 00:31:16.442 "enable_placement_id": 0, 00:31:16.442 "enable_quickack": false, 00:31:16.442 "enable_recv_pipe": true, 00:31:16.442 "enable_zerocopy_send_client": false, 00:31:16.442 "enable_zerocopy_send_server": true, 00:31:16.442 "impl_name": "ssl", 00:31:16.442 "recv_buf_size": 4096, 00:31:16.442 "send_buf_size": 4096, 00:31:16.442 "tls_version": 0, 00:31:16.442 "zerocopy_threshold": 0 00:31:16.442 } 00:31:16.442 } 00:31:16.442 ] 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "subsystem": "vmd", 00:31:16.442 "config": [] 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "subsystem": "accel", 00:31:16.442 "config": [ 00:31:16.442 { 00:31:16.442 "method": "accel_set_options", 00:31:16.442 "params": { 00:31:16.442 "buf_count": 2048, 00:31:16.442 "large_cache_size": 16, 00:31:16.442 
"sequence_count": 2048, 00:31:16.442 "small_cache_size": 128, 00:31:16.442 "task_count": 2048 00:31:16.442 } 00:31:16.442 } 00:31:16.442 ] 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "subsystem": "bdev", 00:31:16.442 "config": [ 00:31:16.442 { 00:31:16.442 "method": "bdev_set_options", 00:31:16.442 "params": { 00:31:16.442 "bdev_auto_examine": true, 00:31:16.442 "bdev_io_cache_size": 256, 00:31:16.442 "bdev_io_pool_size": 65535, 00:31:16.442 "iobuf_large_cache_size": 16, 00:31:16.442 "iobuf_small_cache_size": 128 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "bdev_raid_set_options", 00:31:16.442 "params": { 00:31:16.442 "process_window_size_kb": 1024 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "bdev_iscsi_set_options", 00:31:16.442 "params": { 00:31:16.442 "timeout_sec": 30 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "bdev_nvme_set_options", 00:31:16.442 "params": { 00:31:16.442 "action_on_timeout": "none", 00:31:16.442 "allow_accel_sequence": false, 00:31:16.442 "arbitration_burst": 0, 00:31:16.442 "bdev_retry_count": 3, 00:31:16.442 "ctrlr_loss_timeout_sec": 0, 00:31:16.442 "delay_cmd_submit": true, 00:31:16.442 "dhchap_dhgroups": [ 00:31:16.442 "null", 00:31:16.442 "ffdhe2048", 00:31:16.442 "ffdhe3072", 00:31:16.442 "ffdhe4096", 00:31:16.442 "ffdhe6144", 00:31:16.442 "ffdhe8192" 00:31:16.442 ], 00:31:16.442 "dhchap_digests": [ 00:31:16.442 "sha256", 00:31:16.442 "sha384", 00:31:16.442 "sha512" 00:31:16.442 ], 00:31:16.442 "disable_auto_failback": false, 00:31:16.442 "fast_io_fail_timeout_sec": 0, 00:31:16.442 "generate_uuids": false, 00:31:16.442 "high_priority_weight": 0, 00:31:16.442 "io_path_stat": false, 00:31:16.442 "io_queue_requests": 512, 00:31:16.442 "keep_alive_timeout_ms": 10000, 00:31:16.442 "low_priority_weight": 0, 00:31:16.442 "medium_priority_weight": 0, 00:31:16.442 "nvme_adminq_poll_period_us": 10000, 00:31:16.442 "nvme_error_stat": false, 00:31:16.442 "nvme_ioq_poll_period_us": 0, 00:31:16.442 "rdma_cm_event_timeout_ms": 0, 00:31:16.442 "rdma_max_cq_size": 0, 00:31:16.442 "rdma_srq_size": 0, 00:31:16.442 "reconnect_delay_sec": 0, 00:31:16.442 "timeout_admin_us": 0, 00:31:16.442 "timeout_us": 0, 00:31:16.442 "transport_ack_timeout": 0, 00:31:16.442 "transport_retry_count": 4, 00:31:16.442 "transport_tos": 0 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "bdev_nvme_attach_controller", 00:31:16.442 "params": { 00:31:16.442 "adrfam": "IPv4", 00:31:16.442 "ctrlr_loss_timeout_sec": 0, 00:31:16.442 "ddgst": false, 00:31:16.442 "fast_io_fail_timeout_sec": 0, 00:31:16.442 "hdgst": false, 00:31:16.442 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.442 "name": "nvme0", 00:31:16.442 "prchk_guard": false, 00:31:16.442 "prchk_reftag": false, 00:31:16.442 "psk": "key0", 00:31:16.442 "reconnect_delay_sec": 0, 00:31:16.442 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.442 "traddr": "127.0.0.1", 00:31:16.442 "trsvcid": "4420", 00:31:16.442 "trtype": "TCP" 00:31:16.442 } 00:31:16.442 }, 00:31:16.442 { 00:31:16.442 "method": "bdev_nvme_set_hotplug", 00:31:16.442 "params": { 00:31:16.442 "enable": false, 00:31:16.442 "period_us": 100000 00:31:16.442 } 00:31:16.442 }, 00:31:16.443 { 00:31:16.443 "method": "bdev_wait_for_examine" 00:31:16.443 } 00:31:16.443 ] 00:31:16.443 }, 00:31:16.443 { 00:31:16.443 "subsystem": "nbd", 00:31:16.443 "config": [] 00:31:16.443 } 00:31:16.443 ] 00:31:16.443 }' 00:31:16.443 14:13:55 -- keyring/file.sh@114 -- # killprocess 95327 00:31:16.443 14:13:55 -- 
common/autotest_common.sh@936 -- # '[' -z 95327 ']' 00:31:16.443 14:13:55 -- common/autotest_common.sh@940 -- # kill -0 95327 00:31:16.443 14:13:55 -- common/autotest_common.sh@941 -- # uname 00:31:16.443 14:13:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:16.443 14:13:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95327 00:31:16.443 killing process with pid 95327 00:31:16.443 Received shutdown signal, test time was about 1.000000 seconds 00:31:16.443 00:31:16.443 Latency(us) 00:31:16.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:16.443 =================================================================================================================== 00:31:16.443 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:16.443 14:13:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:16.443 14:13:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:16.443 14:13:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95327' 00:31:16.443 14:13:55 -- common/autotest_common.sh@955 -- # kill 95327 00:31:16.443 14:13:55 -- common/autotest_common.sh@960 -- # wait 95327 00:31:17.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:17.381 14:13:57 -- keyring/file.sh@117 -- # bperfpid=95790 00:31:17.381 14:13:57 -- keyring/file.sh@119 -- # waitforlisten 95790 /var/tmp/bperf.sock 00:31:17.381 14:13:57 -- common/autotest_common.sh@817 -- # '[' -z 95790 ']' 00:31:17.381 14:13:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:17.381 14:13:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:17.381 14:13:57 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:31:17.381 14:13:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
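Here the first bdevperf instance (pid 95327) is shut down and a second one (pid 95790) is started with -c /dev/fd/63, fed the JSON that save_config produced above, so both key files and the nvme0 controller are recreated purely from configuration instead of live RPCs. A sketch of that restart, assuming the configuration is captured before the first instance is killed and handed over through a process substitution (which is what shows up as /dev/fd/63 in the trace):

    # capture the live keyring + bdev configuration from the first bdevperf
    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)

    # ... first bdevperf instance is killed here ...

    # relaunch bdevperf and replay the saved configuration
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")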
00:31:17.381 14:13:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:17.381 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:31:17.381 14:13:57 -- keyring/file.sh@115 -- # echo '{ 00:31:17.381 "subsystems": [ 00:31:17.381 { 00:31:17.381 "subsystem": "keyring", 00:31:17.381 "config": [ 00:31:17.381 { 00:31:17.381 "method": "keyring_file_add_key", 00:31:17.381 "params": { 00:31:17.381 "name": "key0", 00:31:17.381 "path": "/tmp/tmp.WQZFaj0bmf" 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "keyring_file_add_key", 00:31:17.381 "params": { 00:31:17.381 "name": "key1", 00:31:17.381 "path": "/tmp/tmp.DGD7rmw0DI" 00:31:17.381 } 00:31:17.381 } 00:31:17.381 ] 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "subsystem": "iobuf", 00:31:17.381 "config": [ 00:31:17.381 { 00:31:17.381 "method": "iobuf_set_options", 00:31:17.381 "params": { 00:31:17.381 "large_bufsize": 135168, 00:31:17.381 "large_pool_count": 1024, 00:31:17.381 "small_bufsize": 8192, 00:31:17.381 "small_pool_count": 8192 00:31:17.381 } 00:31:17.381 } 00:31:17.381 ] 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "subsystem": "sock", 00:31:17.381 "config": [ 00:31:17.381 { 00:31:17.381 "method": "sock_impl_set_options", 00:31:17.381 "params": { 00:31:17.381 "enable_ktls": false, 00:31:17.381 "enable_placement_id": 0, 00:31:17.381 "enable_quickack": false, 00:31:17.381 "enable_recv_pipe": true, 00:31:17.381 "enable_zerocopy_send_client": false, 00:31:17.381 "enable_zerocopy_send_server": true, 00:31:17.381 "impl_name": "posix", 00:31:17.381 "recv_buf_size": 2097152, 00:31:17.381 "send_buf_size": 2097152, 00:31:17.381 "tls_version": 0, 00:31:17.381 "zerocopy_threshold": 0 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "sock_impl_set_options", 00:31:17.381 "params": { 00:31:17.381 "enable_ktls": false, 00:31:17.381 "enable_placement_id": 0, 00:31:17.381 "enable_quickack": false, 00:31:17.381 "enable_recv_pipe": true, 00:31:17.381 "enable_zerocopy_send_client": false, 00:31:17.381 "enable_zerocopy_send_server": true, 00:31:17.381 "impl_name": "ssl", 00:31:17.381 "recv_buf_size": 4096, 00:31:17.381 "send_buf_size": 4096, 00:31:17.381 "tls_version": 0, 00:31:17.381 "zerocopy_threshold": 0 00:31:17.381 } 00:31:17.381 } 00:31:17.381 ] 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "subsystem": "vmd", 00:31:17.381 "config": [] 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "subsystem": "accel", 00:31:17.381 "config": [ 00:31:17.381 { 00:31:17.381 "method": "accel_set_options", 00:31:17.381 "params": { 00:31:17.381 "buf_count": 2048, 00:31:17.381 "large_cache_size": 16, 00:31:17.381 "sequence_count": 2048, 00:31:17.381 "small_cache_size": 128, 00:31:17.381 "task_count": 2048 00:31:17.381 } 00:31:17.381 } 00:31:17.381 ] 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "subsystem": "bdev", 00:31:17.381 "config": [ 00:31:17.381 { 00:31:17.381 "method": "bdev_set_options", 00:31:17.381 "params": { 00:31:17.381 "bdev_auto_examine": true, 00:31:17.381 "bdev_io_cache_size": 256, 00:31:17.381 "bdev_io_pool_size": 65535, 00:31:17.381 "iobuf_large_cache_size": 16, 00:31:17.381 "iobuf_small_cache_size": 128 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "bdev_raid_set_options", 00:31:17.381 "params": { 00:31:17.381 "process_window_size_kb": 1024 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "bdev_iscsi_set_options", 00:31:17.381 "params": { 00:31:17.381 "timeout_sec": 30 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "bdev_nvme_set_options", 
00:31:17.381 "params": { 00:31:17.381 "action_on_timeout": "none", 00:31:17.381 "allow_accel_sequence": false, 00:31:17.381 "arbitration_burst": 0, 00:31:17.381 "bdev_retry_count": 3, 00:31:17.381 "ctrlr_loss_timeout_sec": 0, 00:31:17.381 "delay_cmd_submit": true, 00:31:17.381 "dhchap_dhgroups": [ 00:31:17.381 "null", 00:31:17.381 "ffdhe2048", 00:31:17.381 "ffdhe3072", 00:31:17.381 "ffdhe4096", 00:31:17.381 "ffdhe6144", 00:31:17.381 "ffdhe8192" 00:31:17.381 ], 00:31:17.381 "dhchap_digests": [ 00:31:17.381 "sha256", 00:31:17.381 "sha384", 00:31:17.381 "sha512" 00:31:17.381 ], 00:31:17.381 "disable_auto_failback": false, 00:31:17.381 "fast_io_fail_timeout_sec": 0, 00:31:17.381 "generate_uuids": false, 00:31:17.381 "high_priority_weight": 0, 00:31:17.381 "io_path_stat": false, 00:31:17.381 "io_queue_requests": 512, 00:31:17.381 "keep_alive_timeout_ms": 10000, 00:31:17.381 "low_priority_weight": 0, 00:31:17.381 "medium_priority_weight": 0, 00:31:17.381 "nvme_adminq_poll_period_us": 10000, 00:31:17.381 "nvme_error_stat": false, 00:31:17.381 "nvme_ioq_poll_period_us": 0, 00:31:17.381 "rdma_cm_event_timeout_ms": 0, 00:31:17.381 "rdma_max_cq_size": 0, 00:31:17.381 "rdma_srq_size": 0, 00:31:17.381 "reconnect_delay_sec": 0, 00:31:17.381 "timeout_admin_us": 0, 00:31:17.381 "timeout_us": 0, 00:31:17.381 "transport_ack_timeout": 0, 00:31:17.381 "transport_retry_count": 4, 00:31:17.381 "transport_tos": 0 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "bdev_nvme_attach_controller", 00:31:17.381 "params": { 00:31:17.381 "adrfam": "IPv4", 00:31:17.381 "ctrlr_loss_timeout_sec": 0, 00:31:17.381 "ddgst": false, 00:31:17.381 "fast_io_fail_timeout_sec": 0, 00:31:17.381 "hdgst": false, 00:31:17.381 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:17.381 "name": "nvme0", 00:31:17.381 "prchk_guard": false, 00:31:17.381 "prchk_reftag": false, 00:31:17.381 "psk": "key0", 00:31:17.381 "reconnect_delay_sec": 0, 00:31:17.381 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:17.381 "traddr": "127.0.0.1", 00:31:17.381 "trsvcid": "4420", 00:31:17.381 "trtype": "TCP" 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "bdev_nvme_set_hotplug", 00:31:17.381 "params": { 00:31:17.381 "enable": false, 00:31:17.381 "period_us": 100000 00:31:17.381 } 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "method": "bdev_wait_for_examine" 00:31:17.381 } 00:31:17.381 ] 00:31:17.381 }, 00:31:17.381 { 00:31:17.381 "subsystem": "nbd", 00:31:17.381 "config": [] 00:31:17.381 } 00:31:17.381 ] 00:31:17.381 }' 00:31:17.640 [2024-04-26 14:13:57.134918] Starting SPDK v24.05-pre git sha1 8571999d8 / DPDK 23.11.0 initialization... 
00:31:17.640 [2024-04-26 14:13:57.135264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95790 ] 00:31:17.640 [2024-04-26 14:13:57.307111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.899 [2024-04-26 14:13:57.542888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.465 [2024-04-26 14:13:57.998268] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:18.465 14:13:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:18.465 14:13:58 -- common/autotest_common.sh@850 -- # return 0 00:31:18.465 14:13:58 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:18.465 14:13:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:18.465 14:13:58 -- keyring/file.sh@120 -- # jq length 00:31:18.776 14:13:58 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:18.776 14:13:58 -- keyring/file.sh@121 -- # get_refcnt key0 00:31:18.776 14:13:58 -- keyring/common.sh@12 -- # get_key key0 00:31:18.776 14:13:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:18.776 14:13:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:18.776 14:13:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:18.776 14:13:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.076 14:13:58 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:19.076 14:13:58 -- keyring/file.sh@122 -- # get_refcnt key1 00:31:19.076 14:13:58 -- keyring/common.sh@12 -- # get_key key1 00:31:19.076 14:13:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:19.076 14:13:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:19.076 14:13:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:19.076 14:13:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:19.335 14:13:58 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:19.335 14:13:58 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:19.335 14:13:58 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:19.335 14:13:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:19.595 14:13:59 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:19.595 14:13:59 -- keyring/file.sh@1 -- # cleanup 00:31:19.595 14:13:59 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WQZFaj0bmf /tmp/tmp.DGD7rmw0DI 00:31:19.595 14:13:59 -- keyring/file.sh@20 -- # killprocess 95790 00:31:19.595 14:13:59 -- common/autotest_common.sh@936 -- # '[' -z 95790 ']' 00:31:19.595 14:13:59 -- common/autotest_common.sh@940 -- # kill -0 95790 00:31:19.595 14:13:59 -- common/autotest_common.sh@941 -- # uname 00:31:19.595 14:13:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:19.595 14:13:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95790 00:31:19.595 killing process with pid 95790 00:31:19.595 Received shutdown signal, test time was about 1.000000 seconds 00:31:19.595 00:31:19.595 Latency(us) 00:31:19.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:19.595 
=================================================================================================================== 00:31:19.595 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:19.595 14:13:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:31:19.595 14:13:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:31:19.595 14:13:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95790' 00:31:19.595 14:13:59 -- common/autotest_common.sh@955 -- # kill 95790 00:31:19.595 14:13:59 -- common/autotest_common.sh@960 -- # wait 95790 00:31:20.971 14:14:00 -- keyring/file.sh@21 -- # killprocess 95286 00:31:20.971 14:14:00 -- common/autotest_common.sh@936 -- # '[' -z 95286 ']' 00:31:20.971 14:14:00 -- common/autotest_common.sh@940 -- # kill -0 95286 00:31:20.971 14:14:00 -- common/autotest_common.sh@941 -- # uname 00:31:20.971 14:14:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:20.971 14:14:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 95286 00:31:20.971 killing process with pid 95286 00:31:20.971 14:14:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:20.971 14:14:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:20.971 14:14:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 95286' 00:31:20.971 14:14:00 -- common/autotest_common.sh@955 -- # kill 95286 00:31:20.971 [2024-04-26 14:14:00.453003] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:20.971 14:14:00 -- common/autotest_common.sh@960 -- # wait 95286 00:31:23.506 00:31:23.506 real 0m18.925s 00:31:23.506 user 0m40.185s 00:31:23.506 sys 0m3.928s 00:31:23.506 14:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:23.506 ************************************ 00:31:23.506 END TEST keyring_file 00:31:23.506 ************************************ 00:31:23.506 14:14:02 -- common/autotest_common.sh@10 -- # set +x 00:31:23.506 14:14:02 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:31:23.506 14:14:02 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:31:23.506 14:14:02 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:31:23.506 14:14:02 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:31:23.506 14:14:02 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:31:23.506 14:14:02 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:31:23.506 14:14:02 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:31:23.506 14:14:02 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:31:23.506 14:14:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:23.506 14:14:02 -- common/autotest_common.sh@10 -- # set +x 00:31:23.506 14:14:02 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:31:23.506 14:14:02 -- common/autotest_common.sh@1378 -- # local 
autotest_es=0 00:31:23.506 14:14:02 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:31:23.506 14:14:02 -- common/autotest_common.sh@10 -- # set +x 00:31:26.058 INFO: APP EXITING 00:31:26.058 INFO: killing all VMs 00:31:26.058 INFO: killing vhost app 00:31:26.058 INFO: EXIT DONE 00:31:26.317 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:26.317 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:31:26.575 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:31:27.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:27.399 Cleaning 00:31:27.399 Removing: /var/run/dpdk/spdk0/config 00:31:27.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:31:27.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:31:27.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:31:27.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:31:27.399 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:31:27.399 Removing: /var/run/dpdk/spdk0/hugepage_info 00:31:27.399 Removing: /var/run/dpdk/spdk1/config 00:31:27.399 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:31:27.399 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:31:27.399 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:31:27.399 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:31:27.399 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:31:27.399 Removing: /var/run/dpdk/spdk1/hugepage_info 00:31:27.399 Removing: /var/run/dpdk/spdk2/config 00:31:27.399 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:31:27.399 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:31:27.399 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:31:27.399 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:31:27.399 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:31:27.399 Removing: /var/run/dpdk/spdk2/hugepage_info 00:31:27.399 Removing: /var/run/dpdk/spdk3/config 00:31:27.399 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:31:27.399 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:31:27.399 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:31:27.399 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:31:27.399 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:31:27.399 Removing: /var/run/dpdk/spdk3/hugepage_info 00:31:27.399 Removing: /var/run/dpdk/spdk4/config 00:31:27.399 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:31:27.399 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:31:27.399 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:31:27.399 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:31:27.399 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:31:27.399 Removing: /var/run/dpdk/spdk4/hugepage_info 00:31:27.399 Removing: /dev/shm/nvmf_trace.0 00:31:27.399 Removing: /dev/shm/spdk_tgt_trace.pid60204 00:31:27.399 Removing: /var/run/dpdk/spdk0 00:31:27.399 Removing: /var/run/dpdk/spdk1 00:31:27.399 Removing: /var/run/dpdk/spdk2 00:31:27.399 Removing: /var/run/dpdk/spdk3 00:31:27.399 Removing: /var/run/dpdk/spdk4 00:31:27.399 Removing: /var/run/dpdk/spdk_pid59942 00:31:27.399 Removing: /var/run/dpdk/spdk_pid60204 00:31:27.399 Removing: /var/run/dpdk/spdk_pid60530 00:31:27.657 Removing: /var/run/dpdk/spdk_pid60649 00:31:27.657 Removing: /var/run/dpdk/spdk_pid60717 00:31:27.657 Removing: /var/run/dpdk/spdk_pid60860 00:31:27.657 Removing: 
/var/run/dpdk/spdk_pid60901 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61064 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61343 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61536 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61662 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61783 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61910 00:31:27.657 Removing: /var/run/dpdk/spdk_pid61960 00:31:27.657 Removing: /var/run/dpdk/spdk_pid62006 00:31:27.657 Removing: /var/run/dpdk/spdk_pid62080 00:31:27.657 Removing: /var/run/dpdk/spdk_pid62223 00:31:27.657 Removing: /var/run/dpdk/spdk_pid62872 00:31:27.657 Removing: /var/run/dpdk/spdk_pid62964 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63071 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63099 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63269 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63309 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63473 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63512 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63586 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63623 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63702 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63733 00:31:27.657 Removing: /var/run/dpdk/spdk_pid63958 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64004 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64090 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64197 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64239 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64335 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64387 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64437 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64492 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64544 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64600 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64645 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64701 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64758 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64809 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64860 00:31:27.657 Removing: /var/run/dpdk/spdk_pid64916 00:31:27.658 Removing: /var/run/dpdk/spdk_pid64968 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65024 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65080 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65125 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65181 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65240 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65294 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65344 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65404 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65497 00:31:27.658 Removing: /var/run/dpdk/spdk_pid65652 00:31:27.658 Removing: /var/run/dpdk/spdk_pid66101 00:31:27.658 Removing: /var/run/dpdk/spdk_pid69756 00:31:27.658 Removing: /var/run/dpdk/spdk_pid70112 00:31:27.658 Removing: /var/run/dpdk/spdk_pid71333 00:31:27.658 Removing: /var/run/dpdk/spdk_pid71732 00:31:27.658 Removing: /var/run/dpdk/spdk_pid72014 00:31:27.917 Removing: /var/run/dpdk/spdk_pid72061 00:31:27.917 Removing: /var/run/dpdk/spdk_pid72980 00:31:27.917 Removing: /var/run/dpdk/spdk_pid73034 00:31:27.917 Removing: /var/run/dpdk/spdk_pid73458 00:31:27.917 Removing: /var/run/dpdk/spdk_pid74006 00:31:27.917 Removing: /var/run/dpdk/spdk_pid74449 00:31:27.917 Removing: /var/run/dpdk/spdk_pid75472 00:31:27.917 Removing: /var/run/dpdk/spdk_pid76502 00:31:27.917 Removing: /var/run/dpdk/spdk_pid76636 00:31:27.917 Removing: /var/run/dpdk/spdk_pid76716 00:31:27.917 Removing: /var/run/dpdk/spdk_pid78254 00:31:27.917 Removing: /var/run/dpdk/spdk_pid78557 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79035 
00:31:27.917 Removing: /var/run/dpdk/spdk_pid79145 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79309 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79365 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79424 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79476 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79665 00:31:27.917 Removing: /var/run/dpdk/spdk_pid79824 00:31:27.917 Removing: /var/run/dpdk/spdk_pid80125 00:31:27.917 Removing: /var/run/dpdk/spdk_pid80272 00:31:27.917 Removing: /var/run/dpdk/spdk_pid80552 00:31:27.917 Removing: /var/run/dpdk/spdk_pid80703 00:31:27.917 Removing: /var/run/dpdk/spdk_pid80858 00:31:27.917 Removing: /var/run/dpdk/spdk_pid81234 00:31:27.917 Removing: /var/run/dpdk/spdk_pid81682 00:31:27.917 Removing: /var/run/dpdk/spdk_pid82027 00:31:27.917 Removing: /var/run/dpdk/spdk_pid82571 00:31:27.917 Removing: /var/run/dpdk/spdk_pid82584 00:31:27.917 Removing: /var/run/dpdk/spdk_pid82957 00:31:27.917 Removing: /var/run/dpdk/spdk_pid82972 00:31:27.917 Removing: /var/run/dpdk/spdk_pid82987 00:31:27.917 Removing: /var/run/dpdk/spdk_pid83025 00:31:27.917 Removing: /var/run/dpdk/spdk_pid83031 00:31:27.917 Removing: /var/run/dpdk/spdk_pid83349 00:31:27.917 Removing: /var/run/dpdk/spdk_pid83395 00:31:27.917 Removing: /var/run/dpdk/spdk_pid83740 00:31:27.917 Removing: /var/run/dpdk/spdk_pid84008 00:31:27.917 Removing: /var/run/dpdk/spdk_pid84530 00:31:27.917 Removing: /var/run/dpdk/spdk_pid85085 00:31:27.917 Removing: /var/run/dpdk/spdk_pid85706 00:31:27.917 Removing: /var/run/dpdk/spdk_pid85713 00:31:27.917 Removing: /var/run/dpdk/spdk_pid87683 00:31:27.917 Removing: /var/run/dpdk/spdk_pid87780 00:31:27.917 Removing: /var/run/dpdk/spdk_pid87882 00:31:27.917 Removing: /var/run/dpdk/spdk_pid87980 00:31:27.917 Removing: /var/run/dpdk/spdk_pid88172 00:31:27.917 Removing: /var/run/dpdk/spdk_pid88269 00:31:27.917 Removing: /var/run/dpdk/spdk_pid88371 00:31:27.917 Removing: /var/run/dpdk/spdk_pid88463 00:31:27.917 Removing: /var/run/dpdk/spdk_pid88834 00:31:27.917 Removing: /var/run/dpdk/spdk_pid89556 00:31:27.917 Removing: /var/run/dpdk/spdk_pid90943 00:31:27.917 Removing: /var/run/dpdk/spdk_pid91156 00:31:27.917 Removing: /var/run/dpdk/spdk_pid91454 00:31:27.917 Removing: /var/run/dpdk/spdk_pid91784 00:31:27.917 Removing: /var/run/dpdk/spdk_pid92372 00:31:27.917 Removing: /var/run/dpdk/spdk_pid92383 00:31:27.917 Removing: /var/run/dpdk/spdk_pid92761 00:31:27.917 Removing: /var/run/dpdk/spdk_pid92932 00:31:27.917 Removing: /var/run/dpdk/spdk_pid93104 00:31:27.917 Removing: /var/run/dpdk/spdk_pid93210 00:31:27.917 Removing: /var/run/dpdk/spdk_pid93374 00:31:28.175 Removing: /var/run/dpdk/spdk_pid93499 00:31:28.175 Removing: /var/run/dpdk/spdk_pid94226 00:31:28.175 Removing: /var/run/dpdk/spdk_pid94262 00:31:28.175 Removing: /var/run/dpdk/spdk_pid94298 00:31:28.175 Removing: /var/run/dpdk/spdk_pid94778 00:31:28.175 Removing: /var/run/dpdk/spdk_pid94814 00:31:28.175 Removing: /var/run/dpdk/spdk_pid94849 00:31:28.175 Removing: /var/run/dpdk/spdk_pid95286 00:31:28.175 Removing: /var/run/dpdk/spdk_pid95327 00:31:28.175 Removing: /var/run/dpdk/spdk_pid95790 00:31:28.175 Clean 00:31:28.175 14:14:07 -- common/autotest_common.sh@1437 -- # return 0 00:31:28.175 14:14:07 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:31:28.175 14:14:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:28.175 14:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:28.175 14:14:07 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:31:28.175 14:14:07 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:31:28.175 14:14:07 -- common/autotest_common.sh@10 -- # set +x 00:31:28.433 14:14:07 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:31:28.433 14:14:07 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:31:28.433 14:14:07 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:31:28.433 14:14:07 -- spdk/autotest.sh@389 -- # hash lcov 00:31:28.433 14:14:07 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:31:28.433 14:14:07 -- spdk/autotest.sh@391 -- # hostname 00:31:28.433 14:14:07 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:31:28.433 geninfo: WARNING: invalid characters removed from testname! 00:31:55.016 14:14:32 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:55.951 14:14:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:58.483 14:14:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:01.041 14:14:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:02.986 14:14:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:04.911 14:14:44 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:07.481 14:14:46 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:07.481 14:14:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
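The lcov calls traced above are autotest.sh's coverage post-processing: capture the counters from the test run, merge them with the pre-test baseline, then strip DPDK, system headers and the example apps from the total. A condensed sketch of that sequence, with shortened output paths and only the essential flags (the actual run writes cov_*.info under the shared output directory and passes additional genhtml rc options):

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o cov_test.info   # capture the test run
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info                    # merge with the baseline
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        $LCOV -r cov_total.info "$pat" -o cov_total.info                         # drop non-SPDK code
    done
    rm -f cov_base.info cov_test.info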
00:32:07.481 14:14:46 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:07.481 14:14:46 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:07.481 14:14:46 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:07.481 14:14:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.481 14:14:46 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.481 14:14:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.481 14:14:46 -- paths/export.sh@5 -- $ export PATH 00:32:07.481 14:14:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:07.481 14:14:46 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:07.481 14:14:46 -- common/autobuild_common.sh@435 -- $ date +%s 00:32:07.481 14:14:46 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714140886.XXXXXX 00:32:07.481 14:14:46 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714140886.I1fj6M 00:32:07.481 14:14:46 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:32:07.481 14:14:46 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:32:07.481 14:14:46 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:32:07.481 14:14:46 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:07.481 14:14:46 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:07.481 14:14:46 -- common/autobuild_common.sh@451 -- $ get_config_params 00:32:07.481 14:14:46 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:32:07.481 14:14:46 -- common/autotest_common.sh@10 -- $ set +x 00:32:07.481 14:14:46 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk 
--with-avahi --with-golang' 00:32:07.481 14:14:46 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:32:07.481 14:14:46 -- pm/common@17 -- $ local monitor 00:32:07.481 14:14:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:07.481 14:14:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=97487 00:32:07.481 14:14:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:07.481 14:14:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=97489 00:32:07.481 14:14:46 -- pm/common@26 -- $ sleep 1 00:32:07.482 14:14:46 -- pm/common@21 -- $ date +%s 00:32:07.482 14:14:46 -- pm/common@21 -- $ date +%s 00:32:07.482 14:14:46 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714140886 00:32:07.482 14:14:46 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714140886 00:32:07.482 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714140886_collect-vmstat.pm.log 00:32:07.482 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714140886_collect-cpu-load.pm.log 00:32:08.046 14:14:47 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:32:08.046 14:14:47 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:08.046 14:14:47 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:08.046 14:14:47 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:08.046 14:14:47 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:08.046 14:14:47 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:08.046 14:14:47 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:08.046 14:14:47 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:08.046 14:14:47 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:08.305 14:14:47 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:08.305 14:14:47 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:08.305 14:14:47 -- pm/common@30 -- $ signal_monitor_resources TERM 00:32:08.305 14:14:47 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:32:08.305 14:14:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:08.305 14:14:47 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:32:08.305 14:14:47 -- pm/common@45 -- $ pid=97494 00:32:08.305 14:14:47 -- pm/common@52 -- $ sudo kill -TERM 97494 00:32:08.305 14:14:47 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:08.305 14:14:47 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:32:08.305 14:14:47 -- pm/common@45 -- $ pid=97495 00:32:08.305 14:14:47 -- pm/common@52 -- $ sudo kill -TERM 97495 00:32:08.305 + [[ -n 5103 ]] 00:32:08.305 + sudo kill 5103 00:32:08.315 [Pipeline] } 00:32:08.336 [Pipeline] // timeout 00:32:08.342 [Pipeline] } 00:32:08.360 [Pipeline] // stage 00:32:08.365 [Pipeline] } 00:32:08.381 [Pipeline] // catchError 00:32:08.390 [Pipeline] stage 00:32:08.392 [Pipeline] { (Stop VM) 00:32:08.405 [Pipeline] sh 00:32:08.687 + vagrant halt 00:32:11.976 ==> default: Halting domain... 
00:32:18.613 [Pipeline] sh 00:32:18.892 + vagrant destroy -f 00:32:22.183 ==> default: Removing domain... 00:32:22.194 [Pipeline] sh 00:32:22.475 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:32:22.484 [Pipeline] } 00:32:22.502 [Pipeline] // stage 00:32:22.506 [Pipeline] } 00:32:22.521 [Pipeline] // dir 00:32:22.526 [Pipeline] } 00:32:22.542 [Pipeline] // wrap 00:32:22.546 [Pipeline] } 00:32:22.560 [Pipeline] // catchError 00:32:22.566 [Pipeline] stage 00:32:22.568 [Pipeline] { (Epilogue) 00:32:22.579 [Pipeline] sh 00:32:22.868 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:28.181 [Pipeline] catchError 00:32:28.182 [Pipeline] { 00:32:28.195 [Pipeline] sh 00:32:28.475 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:28.475 Artifacts sizes are good 00:32:28.484 [Pipeline] } 00:32:28.498 [Pipeline] // catchError 00:32:28.508 [Pipeline] archiveArtifacts 00:32:28.514 Archiving artifacts 00:32:28.690 [Pipeline] cleanWs 00:32:28.702 [WS-CLEANUP] Deleting project workspace... 00:32:28.702 [WS-CLEANUP] Deferred wipeout is used... 00:32:28.708 [WS-CLEANUP] done 00:32:28.709 [Pipeline] } 00:32:28.729 [Pipeline] // stage 00:32:28.735 [Pipeline] } 00:32:28.751 [Pipeline] // node 00:32:28.756 [Pipeline] End of Pipeline 00:32:28.790 Finished: SUCCESS